00:00:00.001 Started by upstream project "autotest-nightly" build number 3911
00:00:00.001 originally caused by:
00:00:00.001 Started by user Latecki, Karol
00:00:00.002 Started by upstream project "autotest-nightly" build number 3909
00:00:00.002 originally caused by:
00:00:00.002 Started by user Latecki, Karol
00:00:00.003 Started by upstream project "autotest-nightly" build number 3908
00:00:00.003 originally caused by:
00:00:00.003 Started by user Latecki, Karol
00:00:00.076 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/iscsi-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.078 The recommended git tool is: git
00:00:00.078 using credential 00000000-0000-0000-0000-000000000002
00:00:00.079 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/iscsi-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.121 Fetching changes from the remote Git repository
00:00:00.123 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.192 Using shallow fetch with depth 1
00:00:00.192 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.192 > git --version # timeout=10
00:00:00.265 > git --version # 'git version 2.39.2'
00:00:00.265 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.368 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.368 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/29/24129/6 # timeout=5
00:00:04.410 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.423 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.434 Checking out Revision e33ef006ccd688d2b66122cd0240b989d53c9017
(FETCH_HEAD)
00:00:04.434 > git config core.sparsecheckout # timeout=10
00:00:04.448 > git read-tree -mu HEAD # timeout=10
00:00:04.465 > git checkout -f e33ef006ccd688d2b66122cd0240b989d53c9017 # timeout=5
00:00:04.493 Commit message: "jenkins/jjb: remove nvme tests from distro specific jobs."
00:00:04.493 > git rev-list --no-walk 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=10
00:00:04.672 [Pipeline] Start of Pipeline
00:00:04.683 [Pipeline] library
00:00:04.684 Loading library shm_lib@master
00:00:04.684 Library shm_lib@master is cached. Copying from home.
00:00:04.698 [Pipeline] node
00:00:04.705 Running on VM-host-SM0 in /var/jenkins/workspace/iscsi-vg-autotest
00:00:04.708 [Pipeline] {
00:00:04.718 [Pipeline] catchError
00:00:04.719 [Pipeline] {
00:00:04.731 [Pipeline] wrap
00:00:04.738 [Pipeline] {
00:00:04.747 [Pipeline] stage
00:00:04.749 [Pipeline] { (Prologue)
00:00:04.765 [Pipeline] echo
00:00:04.766 Node: VM-host-SM0
00:00:04.770 [Pipeline] cleanWs
00:00:04.777 [WS-CLEANUP] Deleting project workspace...
00:00:04.777 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.782 [WS-CLEANUP] done
00:00:05.028 [Pipeline] setCustomBuildProperty
00:00:05.116 [Pipeline] httpRequest
00:00:05.138 [Pipeline] echo
00:00:05.140 Sorcerer 10.211.164.101 is alive
00:00:05.147 [Pipeline] httpRequest
00:00:05.152 HttpMethod: GET
00:00:05.152 URL: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz
00:00:05.153 Sending request to url: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz
00:00:05.154 Response Code: HTTP/1.1 200 OK
00:00:05.155 Success: Status code 200 is in the accepted range: 200,404
00:00:05.155 Saving response body to /var/jenkins/workspace/iscsi-vg-autotest/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz
00:00:05.668 [Pipeline] sh
00:00:05.950 + tar --no-same-owner -xf jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz
00:00:05.962 [Pipeline] httpRequest
00:00:05.974 [Pipeline] echo
00:00:05.976 Sorcerer 10.211.164.101 is alive
00:00:05.983 [Pipeline] httpRequest
00:00:05.987 HttpMethod: GET
00:00:05.988 URL: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:00:05.989 Sending request to url: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:00:05.994 Response Code: HTTP/1.1 200 OK
00:00:05.994 Success: Status code 200 is in the accepted range: 200,404
00:00:05.995 Saving response body to /var/jenkins/workspace/iscsi-vg-autotest/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:01:03.122 [Pipeline] sh
00:01:03.401 + tar --no-same-owner -xf spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:01:06.744 [Pipeline] sh
00:01:07.023 + git -C spdk log --oneline -n5
00:01:07.023 f7b31b2b9 log: declare g_deprecation_epoch static
00:01:07.023 21d0c3ad6 trace: declare g_user_thread_index_start, g_ut_array and g_ut_array_mutex static
00:01:07.023 3731556bd lvol: declare g_lvol_if static
00:01:07.023 f8404a2d4 nvme: declare g_current_transport_index and g_spdk_transports static
00:01:07.023 34efb6523 dma: declare g_dma_mutex and g_dma_memory_domains static
00:01:07.043 [Pipeline] writeFile
00:01:07.059 [Pipeline] sh
00:01:07.339 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:07.350 [Pipeline] sh
00:01:07.651 + cat autorun-spdk.conf
00:01:07.651 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:07.651 SPDK_TEST_ISCSI_INITIATOR=1
00:01:07.651 SPDK_TEST_ISCSI=1
00:01:07.651 SPDK_TEST_RBD=1
00:01:07.651 SPDK_RUN_ASAN=1
00:01:07.651 SPDK_RUN_UBSAN=1
00:01:07.651 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:07.657 RUN_NIGHTLY=1
00:01:07.660 [Pipeline] }
00:01:07.676 [Pipeline] // stage
00:01:07.692 [Pipeline] stage
00:01:07.694 [Pipeline] { (Run VM)
00:01:07.708 [Pipeline] sh
00:01:07.982 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:07.982 + echo 'Start stage prepare_nvme.sh'
00:01:07.982 Start stage prepare_nvme.sh
00:01:07.982 + [[ -n 3 ]]
00:01:07.982 + disk_prefix=ex3
00:01:07.982 + [[ -n /var/jenkins/workspace/iscsi-vg-autotest ]]
00:01:07.982 + [[ -e /var/jenkins/workspace/iscsi-vg-autotest/autorun-spdk.conf ]]
00:01:07.982 + source /var/jenkins/workspace/iscsi-vg-autotest/autorun-spdk.conf
00:01:07.982 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:07.982 ++ SPDK_TEST_ISCSI_INITIATOR=1
00:01:07.982 ++ SPDK_TEST_ISCSI=1
00:01:07.982 ++ SPDK_TEST_RBD=1
00:01:07.982 ++ SPDK_RUN_ASAN=1
00:01:07.982 ++ SPDK_RUN_UBSAN=1
00:01:07.982 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:07.982 ++ RUN_NIGHTLY=1
00:01:07.982 + cd /var/jenkins/workspace/iscsi-vg-autotest
00:01:07.982 + nvme_files=()
00:01:07.982 + declare -A nvme_files
00:01:07.982 + backend_dir=/var/lib/libvirt/images/backends
00:01:07.982 + nvme_files['nvme.img']=5G
00:01:07.982 + nvme_files['nvme-cmb.img']=5G
00:01:07.982 + nvme_files['nvme-multi0.img']=4G
00:01:07.982 + nvme_files['nvme-multi1.img']=4G
00:01:07.982 + nvme_files['nvme-multi2.img']=4G
00:01:07.982 + nvme_files['nvme-openstack.img']=8G
00:01:07.982 + nvme_files['nvme-zns.img']=5G
00:01:07.982 + ((
SPDK_TEST_NVME_PMR == 1 ))
00:01:07.982 + (( SPDK_TEST_FTL == 1 ))
00:01:07.982 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:07.982 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:07.982 + for nvme in "${!nvme_files[@]}"
00:01:07.982 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G
00:01:07.982 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:07.982 + for nvme in "${!nvme_files[@]}"
00:01:07.982 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G
00:01:07.982 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:07.982 + for nvme in "${!nvme_files[@]}"
00:01:07.982 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G
00:01:07.982 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:07.982 + for nvme in "${!nvme_files[@]}"
00:01:07.982 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G
00:01:07.982 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:07.982 + for nvme in "${!nvme_files[@]}"
00:01:07.982 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G
00:01:07.982 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:07.982 + for nvme in "${!nvme_files[@]}"
00:01:07.982 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G
00:01:07.982 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:08.240 + for nvme in "${!nvme_files[@]}"
00:01:08.240 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G
00:01:08.497 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:08.497 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu
00:01:08.497 + echo 'End stage prepare_nvme.sh'
00:01:08.497 End stage prepare_nvme.sh
00:01:08.509 [Pipeline] sh
00:01:08.789 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:08.789 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora38
00:01:08.789
00:01:08.789 DIR=/var/jenkins/workspace/iscsi-vg-autotest/spdk/scripts/vagrant
00:01:08.789 SPDK_DIR=/var/jenkins/workspace/iscsi-vg-autotest/spdk
00:01:08.789 VAGRANT_TARGET=/var/jenkins/workspace/iscsi-vg-autotest
00:01:08.789 HELP=0
00:01:08.789 DRY_RUN=0
00:01:08.789 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,
00:01:08.789 NVME_DISKS_TYPE=nvme,nvme,
00:01:08.789 NVME_AUTO_CREATE=0
00:01:08.789 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,
00:01:08.789 NVME_CMB=,,
00:01:08.789 NVME_PMR=,,
00:01:08.789 NVME_ZNS=,,
00:01:08.789 NVME_MS=,,
00:01:08.789 NVME_FDP=,,
00:01:08.789 SPDK_VAGRANT_DISTRO=fedora38
00:01:08.789 SPDK_VAGRANT_VMCPU=10
00:01:08.789 SPDK_VAGRANT_VMRAM=12288
00:01:08.789 SPDK_VAGRANT_PROVIDER=libvirt
00:01:08.789 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:08.789 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:08.789
SPDK_OPENSTACK_NETWORK=0
00:01:08.789 VAGRANT_PACKAGE_BOX=0
00:01:08.789 VAGRANTFILE=/var/jenkins/workspace/iscsi-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:08.789 FORCE_DISTRO=true
00:01:08.789 VAGRANT_BOX_VERSION=
00:01:08.789 EXTRA_VAGRANTFILES=
00:01:08.789 NIC_MODEL=e1000
00:01:08.789
00:01:08.789 mkdir: created directory '/var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt'
00:01:08.789 /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt /var/jenkins/workspace/iscsi-vg-autotest
00:01:12.071 Bringing machine 'default' up with 'libvirt' provider...
00:01:12.638 ==> default: Creating image (snapshot of base box volume).
00:01:12.896 ==> default: Creating domain with the following settings...
00:01:12.896 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721667931_e582486a99ee8f2c1d20
00:01:12.896 ==> default: -- Domain type: kvm
00:01:12.896 ==> default: -- Cpus: 10
00:01:12.896 ==> default: -- Feature: acpi
00:01:12.896 ==> default: -- Feature: apic
00:01:12.896 ==> default: -- Feature: pae
00:01:12.896 ==> default: -- Memory: 12288M
00:01:12.896 ==> default: -- Memory Backing: hugepages:
00:01:12.896 ==> default: -- Management MAC:
00:01:12.896 ==> default: -- Loader:
00:01:12.896 ==> default: -- Nvram:
00:01:12.896 ==> default: -- Base box: spdk/fedora38
00:01:12.896 ==> default: -- Storage pool: default
00:01:12.896 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721667931_e582486a99ee8f2c1d20.img (20G)
00:01:12.896 ==> default: -- Volume Cache: default
00:01:12.896 ==> default: -- Kernel:
00:01:12.896 ==> default: -- Initrd:
00:01:12.896 ==> default: -- Graphics Type: vnc
00:01:12.896 ==> default: -- Graphics Port: -1
00:01:12.896 ==> default: -- Graphics IP: 127.0.0.1
00:01:12.896 ==> default: -- Graphics Password: Not defined
00:01:12.896 ==> default: -- Video Type: cirrus
00:01:12.896 ==> default: -- Video VRAM: 9216
00:01:12.896 ==> default: --
Sound Type:
00:01:12.896 ==> default: -- Keymap: en-us
00:01:12.896 ==> default: -- TPM Path:
00:01:12.896 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:12.896 ==> default: -- Command line args:
00:01:12.896 ==> default: -> value=-device,
00:01:12.896 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:12.896 ==> default: -> value=-drive,
00:01:12.896 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0,
00:01:12.896 ==> default: -> value=-device,
00:01:12.896 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:12.896 ==> default: -> value=-device,
00:01:12.896 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:12.896 ==> default: -> value=-drive,
00:01:12.896 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:12.896 ==> default: -> value=-device,
00:01:12.896 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:12.896 ==> default: -> value=-drive,
00:01:12.896 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:12.896 ==> default: -> value=-device,
00:01:12.896 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:12.896 ==> default: -> value=-drive,
00:01:12.896 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:12.896 ==> default: -> value=-device,
00:01:12.896 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:13.155 ==> default: Creating shared folders metadata...
00:01:13.155 ==> default: Starting domain.
00:01:14.535 ==> default: Waiting for domain to get an IP address...
00:01:32.676 ==> default: Waiting for SSH to become available...
00:01:32.676 ==> default: Configuring and enabling network interfaces...
00:01:36.010 default: SSH address: 192.168.121.95:22
00:01:36.010 default: SSH username: vagrant
00:01:36.010 default: SSH auth method: private key
00:01:37.912 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:46.016 ==> default: Mounting SSHFS shared folder...
00:01:47.391 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:01:47.391 ==> default: Checking Mount..
00:01:48.326 ==> default: Folder Successfully Mounted!
00:01:48.326 ==> default: Running provisioner: file...
00:01:49.262 default: ~/.gitconfig => .gitconfig
00:01:49.829
00:01:49.829 SUCCESS!
00:01:49.829
00:01:49.829 cd to /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use.
00:01:49.829 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:49.829 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt" to destroy all trace of vm.
00:01:49.829
00:01:49.839 [Pipeline] }
00:01:49.857 [Pipeline] // stage
00:01:49.867 [Pipeline] dir
00:01:49.868 Running in /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt
00:01:49.870 [Pipeline] {
00:01:49.884 [Pipeline] catchError
00:01:49.886 [Pipeline] {
00:01:49.899 [Pipeline] sh
00:01:50.179 + vagrant ssh-config --host vagrant
00:01:50.179 + sed -ne /^Host/,$p
00:01:50.179 + tee ssh_conf
00:01:54.391 Host vagrant
00:01:54.391 HostName 192.168.121.95
00:01:54.391 User vagrant
00:01:54.391 Port 22
00:01:54.391 UserKnownHostsFile /dev/null
00:01:54.391 StrictHostKeyChecking no
00:01:54.391 PasswordAuthentication no
00:01:54.391 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38
00:01:54.391 IdentitiesOnly yes
00:01:54.391 LogLevel FATAL
00:01:54.391 ForwardAgent yes
00:01:54.391 ForwardX11 yes
00:01:54.391
00:01:54.405 [Pipeline] withEnv
00:01:54.408 [Pipeline] {
00:01:54.469 [Pipeline] sh
00:01:54.756 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:54.756 source /etc/os-release
00:01:54.756 [[ -e /image.version ]] && img=$(< /image.version)
00:01:54.756 # Minimal, systemd-like check.
00:01:54.756 if [[ -e /.dockerenv ]]; then
00:01:54.756 # Clear garbage from the node's name:
00:01:54.756 # agt-er_autotest_547-896 -> autotest_547-896
00:01:54.756 # $HOSTNAME is the actual container id
00:01:54.756 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:54.756 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:54.756 # We can assume this is a mount from a host where container is running,
00:01:54.756 # so fetch its hostname to easily identify the target swarm worker.
00:01:54.756 container="$(< /etc/hostname) ($agent)"
00:01:54.756 else
00:01:54.756 # Fallback
00:01:54.756 container=$agent
00:01:54.756 fi
00:01:54.756 fi
00:01:54.756 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:54.756
00:01:55.026 [Pipeline] }
00:01:55.045 [Pipeline] // withEnv
00:01:55.055 [Pipeline] setCustomBuildProperty
00:01:55.070 [Pipeline] stage
00:01:55.073 [Pipeline] { (Tests)
00:01:55.091 [Pipeline] sh
00:01:55.369 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:55.642 [Pipeline] sh
00:01:55.922 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:56.195 [Pipeline] timeout
00:01:56.195 Timeout set to expire in 45 min
00:01:56.197 [Pipeline] {
00:01:56.213 [Pipeline] sh
00:01:56.492 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:57.058 HEAD is now at f7b31b2b9 log: declare g_deprecation_epoch static
00:01:57.071 [Pipeline] sh
00:01:57.401 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:57.672 [Pipeline] sh
00:01:57.951 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:58.224 [Pipeline] sh
00:01:58.503 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=iscsi-vg-autotest ./autoruner.sh spdk_repo
00:01:58.503 ++ readlink -f spdk_repo
00:01:58.503 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:58.503 + [[ -n /home/vagrant/spdk_repo ]]
00:01:58.503 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:58.503 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:58.503 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:58.503 + [[ !
-d /home/vagrant/spdk_repo/output ]]
00:01:58.503 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:58.503 + [[ iscsi-vg-autotest == pkgdep-* ]]
00:01:58.503 + cd /home/vagrant/spdk_repo
00:01:58.503 + source /etc/os-release
00:01:58.503 ++ NAME='Fedora Linux'
00:01:58.503 ++ VERSION='38 (Cloud Edition)'
00:01:58.503 ++ ID=fedora
00:01:58.503 ++ VERSION_ID=38
00:01:58.503 ++ VERSION_CODENAME=
00:01:58.503 ++ PLATFORM_ID=platform:f38
00:01:58.503 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:58.503 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:58.503 ++ LOGO=fedora-logo-icon
00:01:58.503 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:58.503 ++ HOME_URL=https://fedoraproject.org/
00:01:58.503 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:58.503 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:58.503 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:58.503 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:58.503 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:58.503 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:58.503 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:58.503 ++ SUPPORT_END=2024-05-14
00:01:58.503 ++ VARIANT='Cloud Edition'
00:01:58.503 ++ VARIANT_ID=cloud
00:01:58.503 + uname -a
00:01:58.503 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:58.503 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:59.068 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:59.068 Hugepages
00:01:59.068 node hugesize free / total
00:01:59.068 node0 1048576kB 0 / 0
00:01:59.068 node0 2048kB 0 / 0
00:01:59.068
00:01:59.068 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:59.068 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:59.068 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:59.068 NVMe 0000:00:11.0 1b36 0010
unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:59.068 + rm -f /tmp/spdk-ld-path
00:01:59.068 + source autorun-spdk.conf
00:01:59.068 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:59.068 ++ SPDK_TEST_ISCSI_INITIATOR=1
00:01:59.068 ++ SPDK_TEST_ISCSI=1
00:01:59.068 ++ SPDK_TEST_RBD=1
00:01:59.068 ++ SPDK_RUN_ASAN=1
00:01:59.068 ++ SPDK_RUN_UBSAN=1
00:01:59.068 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:59.068 ++ RUN_NIGHTLY=1
00:01:59.068 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:59.068 + [[ -n '' ]]
00:01:59.068 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:59.326 + for M in /var/spdk/build-*-manifest.txt
00:01:59.327 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:59.327 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:59.327 + for M in /var/spdk/build-*-manifest.txt
00:01:59.327 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:59.327 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:59.327 ++ uname
00:01:59.327 + [[ Linux == \L\i\n\u\x ]]
00:01:59.327 + sudo dmesg -T
00:01:59.327 + sudo dmesg --clear
00:01:59.327 + dmesg_pid=5149
00:01:59.327 + sudo dmesg -Tw
00:01:59.327 + [[ Fedora Linux == FreeBSD ]]
00:01:59.327 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:59.327 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:59.327 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:59.327 + [[ -x /usr/src/fio-static/fio ]]
00:01:59.327 + export FIO_BIN=/usr/src/fio-static/fio
00:01:59.327 + FIO_BIN=/usr/src/fio-static/fio
00:01:59.327 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:59.327 + [[ !
-v VFIO_QEMU_BIN ]]
00:01:59.327 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:59.327 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:59.327 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:59.327 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:59.327 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:59.327 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:59.327 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:59.327 Test configuration:
00:01:59.327 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:59.327 SPDK_TEST_ISCSI_INITIATOR=1
00:01:59.327 SPDK_TEST_ISCSI=1
00:01:59.327 SPDK_TEST_RBD=1
00:01:59.327 SPDK_RUN_ASAN=1
00:01:59.327 SPDK_RUN_UBSAN=1
00:01:59.327 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:59.327 RUN_NIGHTLY=1
17:06:18 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
17:06:18 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
17:06:18 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
17:06:18 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
17:06:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:06:18 -- paths/export.sh@3 -- $
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:06:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:06:18 -- paths/export.sh@5 -- $ export PATH
17:06:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:06:18 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output
17:06:18 -- common/autobuild_common.sh@447 -- $ date +%s
17:06:18 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721667978.XXXXXX
17:06:18 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721667978.9BcA4L
17:06:18 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
17:06:18 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
17:06:18 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
17:06:18 -- common/autobuild_common.sh@460
-- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
17:06:18 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
17:06:18 -- common/autobuild_common.sh@463 -- $ get_config_params
17:06:18 -- common/autotest_common.sh@396 -- $ xtrace_disable
17:06:18 -- common/autotest_common.sh@10 -- $ set +x
17:06:18 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-rbd --enable-ubsan --enable-asan --enable-coverage --with-ublk'
17:06:18 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
17:06:18 -- pm/common@17 -- $ local monitor
17:06:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
17:06:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
17:06:18 -- pm/common@25 -- $ sleep 1
17:06:18 -- pm/common@21 -- $ date +%s
17:06:18 -- pm/common@21 -- $ date +%s
17:06:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721667978
17:06:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721667978
00:01:59.584 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721667978_collect-vmstat.pm.log
00:01:59.584 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721667978_collect-cpu-load.pm.log
00:02:00.518 17:06:19 -- common/autobuild_common.sh@466 -- $
trap stop_monitor_resources EXIT
17:06:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
17:06:19 -- spdk/autobuild.sh@12 -- $ umask 022
17:06:19 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
17:06:19 -- spdk/autobuild.sh@16 -- $ date -u
00:02:00.518 Mon Jul 22 05:06:19 PM UTC 2024
17:06:19 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:00.518 v24.09-pre-297-gf7b31b2b9
17:06:19 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
17:06:19 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
17:06:19 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
17:06:19 -- common/autotest_common.sh@1105 -- $ xtrace_disable
17:06:19 -- common/autotest_common.sh@10 -- $ set +x
00:02:00.518 ************************************
00:02:00.518 START TEST asan
00:02:00.518 ************************************
00:02:00.518 using asan
17:06:19 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan'
00:02:00.518
00:02:00.519 real 0m0.000s
00:02:00.519 user 0m0.000s
00:02:00.519 sys 0m0.000s
17:06:19 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable
17:06:19 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:00.519 ************************************
00:02:00.519 END TEST asan
00:02:00.519 ************************************
17:06:19 -- common/autotest_common.sh@1142 -- $ return 0
17:06:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
17:06:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
17:06:19 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
17:06:19 -- common/autotest_common.sh@1105 -- $ xtrace_disable
17:06:19 -- common/autotest_common.sh@10 -- $ set +x
00:02:00.519 ************************************
00:02:00.519 START TEST ubsan
************************************
00:02:00.519 using ubsan
17:06:19 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:02:00.519
00:02:00.519 real 0m0.000s
00:02:00.519 user 0m0.000s
00:02:00.519 sys 0m0.000s
17:06:19 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
17:06:19 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:00.519 ************************************
00:02:00.519 END TEST ubsan
00:02:00.519 ************************************
17:06:19 -- common/autotest_common.sh@1142 -- $ return 0
17:06:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
17:06:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
17:06:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
17:06:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
17:06:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
17:06:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
17:06:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
17:06:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
17:06:19 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-rbd --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:02:00.519 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:00.519 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:01.085 Using 'verbs' RDMA provider
00:02:14.713 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:29.578 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:29.578 Creating mk/config.mk...done.
00:02:29.578 Creating mk/cc.flags.mk...done.
00:02:29.578 Type 'make' to build.
00:02:29.578 17:06:46 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:29.578 17:06:46 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:29.578 17:06:46 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:29.578 17:06:46 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.578 ************************************ 00:02:29.578 START TEST make 00:02:29.578 ************************************ 00:02:29.578 17:06:46 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:29.578 make[1]: Nothing to be done for 'all'. 00:02:39.635 The Meson build system 00:02:39.635 Version: 1.3.1 00:02:39.635 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:39.635 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:39.635 Build type: native build 00:02:39.635 Program cat found: YES (/usr/bin/cat) 00:02:39.635 Project name: DPDK 00:02:39.635 Project version: 24.03.0 00:02:39.635 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:39.635 C linker for the host machine: cc ld.bfd 2.39-16 00:02:39.635 Host machine cpu family: x86_64 00:02:39.635 Host machine cpu: x86_64 00:02:39.635 Message: ## Building in Developer Mode ## 00:02:39.635 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:39.635 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:39.635 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:39.635 Program python3 found: YES (/usr/bin/python3) 00:02:39.635 Program cat found: YES (/usr/bin/cat) 00:02:39.635 Compiler for C supports arguments -march=native: YES 00:02:39.635 Checking for size of "void *" : 8 00:02:39.635 Checking for size of "void *" : 8 (cached) 00:02:39.635 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:39.635 Library m found: YES 00:02:39.635 Library numa found: YES 00:02:39.635 Has header "numaif.h" : YES 
00:02:39.635 Library fdt found: NO 00:02:39.635 Library execinfo found: NO 00:02:39.635 Has header "execinfo.h" : YES 00:02:39.635 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:39.635 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:39.635 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:39.635 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:39.635 Run-time dependency openssl found: YES 3.0.9 00:02:39.635 Run-time dependency libpcap found: YES 1.10.4 00:02:39.635 Has header "pcap.h" with dependency libpcap: YES 00:02:39.635 Compiler for C supports arguments -Wcast-qual: YES 00:02:39.635 Compiler for C supports arguments -Wdeprecated: YES 00:02:39.635 Compiler for C supports arguments -Wformat: YES 00:02:39.635 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:39.635 Compiler for C supports arguments -Wformat-security: NO 00:02:39.635 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:39.635 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:39.635 Compiler for C supports arguments -Wnested-externs: YES 00:02:39.635 Compiler for C supports arguments -Wold-style-definition: YES 00:02:39.635 Compiler for C supports arguments -Wpointer-arith: YES 00:02:39.635 Compiler for C supports arguments -Wsign-compare: YES 00:02:39.635 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:39.635 Compiler for C supports arguments -Wundef: YES 00:02:39.635 Compiler for C supports arguments -Wwrite-strings: YES 00:02:39.635 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:39.635 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:39.635 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:39.635 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:39.635 Program objdump found: YES (/usr/bin/objdump) 00:02:39.635 Compiler for C supports arguments -mavx512f: YES 00:02:39.635 Checking if "AVX512 
checking" compiles: YES 00:02:39.635 Fetching value of define "__SSE4_2__" : 1 00:02:39.635 Fetching value of define "__AES__" : 1 00:02:39.635 Fetching value of define "__AVX__" : 1 00:02:39.635 Fetching value of define "__AVX2__" : 1 00:02:39.635 Fetching value of define "__AVX512BW__" : (undefined) 00:02:39.635 Fetching value of define "__AVX512CD__" : (undefined) 00:02:39.635 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:39.635 Fetching value of define "__AVX512F__" : (undefined) 00:02:39.635 Fetching value of define "__AVX512VL__" : (undefined) 00:02:39.635 Fetching value of define "__PCLMUL__" : 1 00:02:39.635 Fetching value of define "__RDRND__" : 1 00:02:39.635 Fetching value of define "__RDSEED__" : 1 00:02:39.635 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:39.635 Fetching value of define "__znver1__" : (undefined) 00:02:39.635 Fetching value of define "__znver2__" : (undefined) 00:02:39.635 Fetching value of define "__znver3__" : (undefined) 00:02:39.635 Fetching value of define "__znver4__" : (undefined) 00:02:39.635 Library asan found: YES 00:02:39.635 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:39.635 Message: lib/log: Defining dependency "log" 00:02:39.635 Message: lib/kvargs: Defining dependency "kvargs" 00:02:39.635 Message: lib/telemetry: Defining dependency "telemetry" 00:02:39.635 Library rt found: YES 00:02:39.635 Checking for function "getentropy" : NO 00:02:39.635 Message: lib/eal: Defining dependency "eal" 00:02:39.635 Message: lib/ring: Defining dependency "ring" 00:02:39.635 Message: lib/rcu: Defining dependency "rcu" 00:02:39.635 Message: lib/mempool: Defining dependency "mempool" 00:02:39.635 Message: lib/mbuf: Defining dependency "mbuf" 00:02:39.635 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:39.635 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:39.635 Compiler for C supports arguments -mpclmul: YES 00:02:39.635 Compiler for C supports arguments 
-maes: YES 00:02:39.635 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:39.635 Compiler for C supports arguments -mavx512bw: YES 00:02:39.635 Compiler for C supports arguments -mavx512dq: YES 00:02:39.635 Compiler for C supports arguments -mavx512vl: YES 00:02:39.635 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:39.635 Compiler for C supports arguments -mavx2: YES 00:02:39.635 Compiler for C supports arguments -mavx: YES 00:02:39.635 Message: lib/net: Defining dependency "net" 00:02:39.635 Message: lib/meter: Defining dependency "meter" 00:02:39.635 Message: lib/ethdev: Defining dependency "ethdev" 00:02:39.635 Message: lib/pci: Defining dependency "pci" 00:02:39.635 Message: lib/cmdline: Defining dependency "cmdline" 00:02:39.635 Message: lib/hash: Defining dependency "hash" 00:02:39.635 Message: lib/timer: Defining dependency "timer" 00:02:39.635 Message: lib/compressdev: Defining dependency "compressdev" 00:02:39.635 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:39.635 Message: lib/dmadev: Defining dependency "dmadev" 00:02:39.635 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:39.635 Message: lib/power: Defining dependency "power" 00:02:39.635 Message: lib/reorder: Defining dependency "reorder" 00:02:39.635 Message: lib/security: Defining dependency "security" 00:02:39.635 Has header "linux/userfaultfd.h" : YES 00:02:39.635 Has header "linux/vduse.h" : YES 00:02:39.635 Message: lib/vhost: Defining dependency "vhost" 00:02:39.635 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:39.635 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:39.635 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:39.635 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:39.636 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:39.636 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:39.636 Message: 
Disabling ml/* drivers: missing internal dependency "mldev" 00:02:39.636 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:39.636 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:39.636 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:39.636 Program doxygen found: YES (/usr/bin/doxygen) 00:02:39.636 Configuring doxy-api-html.conf using configuration 00:02:39.636 Configuring doxy-api-man.conf using configuration 00:02:39.636 Program mandb found: YES (/usr/bin/mandb) 00:02:39.636 Program sphinx-build found: NO 00:02:39.636 Configuring rte_build_config.h using configuration 00:02:39.636 Message: 00:02:39.636 ================= 00:02:39.636 Applications Enabled 00:02:39.636 ================= 00:02:39.636 00:02:39.636 apps: 00:02:39.636 00:02:39.636 00:02:39.636 Message: 00:02:39.636 ================= 00:02:39.636 Libraries Enabled 00:02:39.636 ================= 00:02:39.636 00:02:39.636 libs: 00:02:39.636 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:39.636 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:39.636 cryptodev, dmadev, power, reorder, security, vhost, 00:02:39.636 00:02:39.636 Message: 00:02:39.636 =============== 00:02:39.636 Drivers Enabled 00:02:39.636 =============== 00:02:39.636 00:02:39.636 common: 00:02:39.636 00:02:39.636 bus: 00:02:39.636 pci, vdev, 00:02:39.636 mempool: 00:02:39.636 ring, 00:02:39.636 dma: 00:02:39.636 00:02:39.636 net: 00:02:39.636 00:02:39.636 crypto: 00:02:39.636 00:02:39.636 compress: 00:02:39.636 00:02:39.636 vdpa: 00:02:39.636 00:02:39.636 00:02:39.636 Message: 00:02:39.636 ================= 00:02:39.636 Content Skipped 00:02:39.636 ================= 00:02:39.636 00:02:39.636 apps: 00:02:39.636 dumpcap: explicitly disabled via build config 00:02:39.636 graph: explicitly disabled via build config 00:02:39.636 pdump: explicitly disabled via build config 00:02:39.636 proc-info: explicitly disabled via build 
config 00:02:39.636 test-acl: explicitly disabled via build config 00:02:39.636 test-bbdev: explicitly disabled via build config 00:02:39.636 test-cmdline: explicitly disabled via build config 00:02:39.636 test-compress-perf: explicitly disabled via build config 00:02:39.636 test-crypto-perf: explicitly disabled via build config 00:02:39.636 test-dma-perf: explicitly disabled via build config 00:02:39.636 test-eventdev: explicitly disabled via build config 00:02:39.636 test-fib: explicitly disabled via build config 00:02:39.636 test-flow-perf: explicitly disabled via build config 00:02:39.636 test-gpudev: explicitly disabled via build config 00:02:39.636 test-mldev: explicitly disabled via build config 00:02:39.636 test-pipeline: explicitly disabled via build config 00:02:39.636 test-pmd: explicitly disabled via build config 00:02:39.636 test-regex: explicitly disabled via build config 00:02:39.636 test-sad: explicitly disabled via build config 00:02:39.636 test-security-perf: explicitly disabled via build config 00:02:39.636 00:02:39.636 libs: 00:02:39.636 argparse: explicitly disabled via build config 00:02:39.636 metrics: explicitly disabled via build config 00:02:39.636 acl: explicitly disabled via build config 00:02:39.636 bbdev: explicitly disabled via build config 00:02:39.636 bitratestats: explicitly disabled via build config 00:02:39.636 bpf: explicitly disabled via build config 00:02:39.636 cfgfile: explicitly disabled via build config 00:02:39.636 distributor: explicitly disabled via build config 00:02:39.636 efd: explicitly disabled via build config 00:02:39.636 eventdev: explicitly disabled via build config 00:02:39.636 dispatcher: explicitly disabled via build config 00:02:39.636 gpudev: explicitly disabled via build config 00:02:39.636 gro: explicitly disabled via build config 00:02:39.636 gso: explicitly disabled via build config 00:02:39.636 ip_frag: explicitly disabled via build config 00:02:39.636 jobstats: explicitly disabled via build config 
00:02:39.636 latencystats: explicitly disabled via build config 00:02:39.636 lpm: explicitly disabled via build config 00:02:39.636 member: explicitly disabled via build config 00:02:39.636 pcapng: explicitly disabled via build config 00:02:39.636 rawdev: explicitly disabled via build config 00:02:39.636 regexdev: explicitly disabled via build config 00:02:39.636 mldev: explicitly disabled via build config 00:02:39.636 rib: explicitly disabled via build config 00:02:39.636 sched: explicitly disabled via build config 00:02:39.636 stack: explicitly disabled via build config 00:02:39.636 ipsec: explicitly disabled via build config 00:02:39.636 pdcp: explicitly disabled via build config 00:02:39.636 fib: explicitly disabled via build config 00:02:39.636 port: explicitly disabled via build config 00:02:39.636 pdump: explicitly disabled via build config 00:02:39.636 table: explicitly disabled via build config 00:02:39.636 pipeline: explicitly disabled via build config 00:02:39.636 graph: explicitly disabled via build config 00:02:39.636 node: explicitly disabled via build config 00:02:39.636 00:02:39.636 drivers: 00:02:39.636 common/cpt: not in enabled drivers build config 00:02:39.636 common/dpaax: not in enabled drivers build config 00:02:39.636 common/iavf: not in enabled drivers build config 00:02:39.636 common/idpf: not in enabled drivers build config 00:02:39.636 common/ionic: not in enabled drivers build config 00:02:39.636 common/mvep: not in enabled drivers build config 00:02:39.636 common/octeontx: not in enabled drivers build config 00:02:39.636 bus/auxiliary: not in enabled drivers build config 00:02:39.636 bus/cdx: not in enabled drivers build config 00:02:39.636 bus/dpaa: not in enabled drivers build config 00:02:39.636 bus/fslmc: not in enabled drivers build config 00:02:39.636 bus/ifpga: not in enabled drivers build config 00:02:39.636 bus/platform: not in enabled drivers build config 00:02:39.636 bus/uacce: not in enabled drivers build config 
00:02:39.636 bus/vmbus: not in enabled drivers build config 00:02:39.636 common/cnxk: not in enabled drivers build config 00:02:39.636 common/mlx5: not in enabled drivers build config 00:02:39.636 common/nfp: not in enabled drivers build config 00:02:39.636 common/nitrox: not in enabled drivers build config 00:02:39.636 common/qat: not in enabled drivers build config 00:02:39.636 common/sfc_efx: not in enabled drivers build config 00:02:39.636 mempool/bucket: not in enabled drivers build config 00:02:39.636 mempool/cnxk: not in enabled drivers build config 00:02:39.636 mempool/dpaa: not in enabled drivers build config 00:02:39.636 mempool/dpaa2: not in enabled drivers build config 00:02:39.636 mempool/octeontx: not in enabled drivers build config 00:02:39.636 mempool/stack: not in enabled drivers build config 00:02:39.636 dma/cnxk: not in enabled drivers build config 00:02:39.636 dma/dpaa: not in enabled drivers build config 00:02:39.636 dma/dpaa2: not in enabled drivers build config 00:02:39.636 dma/hisilicon: not in enabled drivers build config 00:02:39.636 dma/idxd: not in enabled drivers build config 00:02:39.636 dma/ioat: not in enabled drivers build config 00:02:39.636 dma/skeleton: not in enabled drivers build config 00:02:39.636 net/af_packet: not in enabled drivers build config 00:02:39.636 net/af_xdp: not in enabled drivers build config 00:02:39.636 net/ark: not in enabled drivers build config 00:02:39.636 net/atlantic: not in enabled drivers build config 00:02:39.636 net/avp: not in enabled drivers build config 00:02:39.636 net/axgbe: not in enabled drivers build config 00:02:39.636 net/bnx2x: not in enabled drivers build config 00:02:39.636 net/bnxt: not in enabled drivers build config 00:02:39.636 net/bonding: not in enabled drivers build config 00:02:39.636 net/cnxk: not in enabled drivers build config 00:02:39.636 net/cpfl: not in enabled drivers build config 00:02:39.636 net/cxgbe: not in enabled drivers build config 00:02:39.636 net/dpaa: not in 
enabled drivers build config 00:02:39.636 net/dpaa2: not in enabled drivers build config 00:02:39.636 net/e1000: not in enabled drivers build config 00:02:39.636 net/ena: not in enabled drivers build config 00:02:39.636 net/enetc: not in enabled drivers build config 00:02:39.636 net/enetfec: not in enabled drivers build config 00:02:39.636 net/enic: not in enabled drivers build config 00:02:39.636 net/failsafe: not in enabled drivers build config 00:02:39.636 net/fm10k: not in enabled drivers build config 00:02:39.636 net/gve: not in enabled drivers build config 00:02:39.636 net/hinic: not in enabled drivers build config 00:02:39.636 net/hns3: not in enabled drivers build config 00:02:39.636 net/i40e: not in enabled drivers build config 00:02:39.636 net/iavf: not in enabled drivers build config 00:02:39.636 net/ice: not in enabled drivers build config 00:02:39.636 net/idpf: not in enabled drivers build config 00:02:39.636 net/igc: not in enabled drivers build config 00:02:39.636 net/ionic: not in enabled drivers build config 00:02:39.636 net/ipn3ke: not in enabled drivers build config 00:02:39.636 net/ixgbe: not in enabled drivers build config 00:02:39.636 net/mana: not in enabled drivers build config 00:02:39.636 net/memif: not in enabled drivers build config 00:02:39.636 net/mlx4: not in enabled drivers build config 00:02:39.636 net/mlx5: not in enabled drivers build config 00:02:39.636 net/mvneta: not in enabled drivers build config 00:02:39.636 net/mvpp2: not in enabled drivers build config 00:02:39.636 net/netvsc: not in enabled drivers build config 00:02:39.636 net/nfb: not in enabled drivers build config 00:02:39.636 net/nfp: not in enabled drivers build config 00:02:39.636 net/ngbe: not in enabled drivers build config 00:02:39.636 net/null: not in enabled drivers build config 00:02:39.636 net/octeontx: not in enabled drivers build config 00:02:39.636 net/octeon_ep: not in enabled drivers build config 00:02:39.636 net/pcap: not in enabled drivers build 
config 00:02:39.637 net/pfe: not in enabled drivers build config 00:02:39.637 net/qede: not in enabled drivers build config 00:02:39.637 net/ring: not in enabled drivers build config 00:02:39.637 net/sfc: not in enabled drivers build config 00:02:39.637 net/softnic: not in enabled drivers build config 00:02:39.637 net/tap: not in enabled drivers build config 00:02:39.637 net/thunderx: not in enabled drivers build config 00:02:39.637 net/txgbe: not in enabled drivers build config 00:02:39.637 net/vdev_netvsc: not in enabled drivers build config 00:02:39.637 net/vhost: not in enabled drivers build config 00:02:39.637 net/virtio: not in enabled drivers build config 00:02:39.637 net/vmxnet3: not in enabled drivers build config 00:02:39.637 raw/*: missing internal dependency, "rawdev" 00:02:39.637 crypto/armv8: not in enabled drivers build config 00:02:39.637 crypto/bcmfs: not in enabled drivers build config 00:02:39.637 crypto/caam_jr: not in enabled drivers build config 00:02:39.637 crypto/ccp: not in enabled drivers build config 00:02:39.637 crypto/cnxk: not in enabled drivers build config 00:02:39.637 crypto/dpaa_sec: not in enabled drivers build config 00:02:39.637 crypto/dpaa2_sec: not in enabled drivers build config 00:02:39.637 crypto/ipsec_mb: not in enabled drivers build config 00:02:39.637 crypto/mlx5: not in enabled drivers build config 00:02:39.637 crypto/mvsam: not in enabled drivers build config 00:02:39.637 crypto/nitrox: not in enabled drivers build config 00:02:39.637 crypto/null: not in enabled drivers build config 00:02:39.637 crypto/octeontx: not in enabled drivers build config 00:02:39.637 crypto/openssl: not in enabled drivers build config 00:02:39.637 crypto/scheduler: not in enabled drivers build config 00:02:39.637 crypto/uadk: not in enabled drivers build config 00:02:39.637 crypto/virtio: not in enabled drivers build config 00:02:39.637 compress/isal: not in enabled drivers build config 00:02:39.637 compress/mlx5: not in enabled drivers build 
config 00:02:39.637 compress/nitrox: not in enabled drivers build config 00:02:39.637 compress/octeontx: not in enabled drivers build config 00:02:39.637 compress/zlib: not in enabled drivers build config 00:02:39.637 regex/*: missing internal dependency, "regexdev" 00:02:39.637 ml/*: missing internal dependency, "mldev" 00:02:39.637 vdpa/ifc: not in enabled drivers build config 00:02:39.637 vdpa/mlx5: not in enabled drivers build config 00:02:39.637 vdpa/nfp: not in enabled drivers build config 00:02:39.637 vdpa/sfc: not in enabled drivers build config 00:02:39.637 event/*: missing internal dependency, "eventdev" 00:02:39.637 baseband/*: missing internal dependency, "bbdev" 00:02:39.637 gpu/*: missing internal dependency, "gpudev" 00:02:39.637 00:02:39.637 00:02:39.637 Build targets in project: 85 00:02:39.637 00:02:39.637 DPDK 24.03.0 00:02:39.637 00:02:39.637 User defined options 00:02:39.637 buildtype : debug 00:02:39.637 default_library : shared 00:02:39.637 libdir : lib 00:02:39.637 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:39.637 b_sanitize : address 00:02:39.637 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:39.637 c_link_args : 00:02:39.637 cpu_instruction_set: native 00:02:39.637 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:39.637 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:39.637 enable_docs : false 00:02:39.637 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:39.637 enable_kmods : false 00:02:39.637 max_lcores : 128 00:02:39.637 tests : false 
00:02:39.637 00:02:39.637 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:40.202 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:40.202 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:40.202 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:40.202 [3/268] Linking static target lib/librte_log.a 00:02:40.202 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:40.202 [5/268] Linking static target lib/librte_kvargs.a 00:02:40.202 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:40.768 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.768 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:41.026 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:41.026 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:41.026 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:41.026 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:41.026 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:41.026 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:41.026 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:41.285 [16/268] Linking static target lib/librte_telemetry.a 00:02:41.285 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.285 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:41.285 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:41.285 [20/268] Linking target lib/librte_log.so.24.1 00:02:41.544 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 
00:02:41.544 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:41.802 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:41.802 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:42.060 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:42.061 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:42.061 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:42.061 [28/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.061 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:42.061 [30/268] Linking target lib/librte_telemetry.so.24.1 00:02:42.061 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:42.322 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:42.322 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:42.322 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:42.322 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:42.322 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:42.589 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:42.848 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:42.848 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:42.848 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:42.848 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:42.848 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:42.848 [43/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:43.107 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:43.365 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:43.365 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:43.365 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:43.365 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:43.366 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:43.624 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:43.624 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:43.624 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:43.882 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:43.883 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:43.883 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:44.141 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:44.399 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:44.399 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:44.399 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:44.399 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:44.399 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:44.399 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:44.657 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:44.657 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:44.915 [65/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:44.915 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:44.915 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:45.188 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:45.188 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:45.485 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:45.485 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:45.485 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:45.485 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:45.485 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:45.485 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:45.485 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:45.743 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:46.002 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:46.002 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:46.002 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:46.261 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:46.261 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:46.261 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:46.519 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:46.519 [85/268] Linking static target lib/librte_ring.a 00:02:46.519 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:46.519 [87/268] Linking static target lib/librte_eal.a 00:02:46.777 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:46.777 [89/268] 
Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:46.777 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:46.777 [91/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.777 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:47.036 [93/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:47.036 [94/268] Linking static target lib/librte_rcu.a 00:02:47.036 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:47.036 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:47.036 [97/268] Linking static target lib/librte_mempool.a 00:02:47.607 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:47.607 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:47.607 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.607 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:47.607 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:47.607 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:47.865 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:48.123 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:48.123 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:48.123 [107/268] Linking static target lib/librte_net.a 00:02:48.123 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:48.123 [109/268] Linking static target lib/librte_meter.a 00:02:48.381 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:48.381 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:48.381 [112/268] Linking static target lib/librte_mbuf.a 00:02:48.381 [113/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:48.381 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.640 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.640 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.640 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:48.898 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:49.158 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:49.158 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:49.415 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.415 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:49.673 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:49.673 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:49.932 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:49.932 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:49.932 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:49.932 [128/268] Linking static target lib/librte_pci.a 00:02:49.932 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:49.932 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:50.190 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:50.190 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:50.190 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:50.190 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:50.191 [135/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:50.449 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:50.449 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.449 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:50.449 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:50.449 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:50.449 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:50.449 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:50.449 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:50.449 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:50.707 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:50.707 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:50.707 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:50.707 [148/268] Linking static target lib/librte_cmdline.a 00:02:50.965 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:51.223 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:51.223 [151/268] Linking static target lib/librte_timer.a 00:02:51.223 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:51.481 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:51.481 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:51.481 [155/268] Linking static target lib/librte_ethdev.a 00:02:51.481 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:51.481 [157/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:51.739 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.997 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:51.997 [160/268] Linking static target lib/librte_compressdev.a 00:02:51.997 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:52.255 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:52.255 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:52.255 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:52.255 [165/268] Linking static target lib/librte_hash.a 00:02:52.255 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:52.527 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:52.527 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:52.527 [169/268] Linking static target lib/librte_dmadev.a 00:02:52.527 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.527 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:52.785 [172/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.785 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:52.785 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:53.043 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:53.301 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:53.301 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:53.301 [178/268] Linking static target lib/librte_cryptodev.a 00:02:53.301 [179/268] Generating 
lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.301 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.301 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:53.301 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:53.559 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:53.559 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:53.816 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:53.816 [186/268] Linking static target lib/librte_power.a 00:02:54.075 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:54.075 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:54.075 [189/268] Linking static target lib/librte_reorder.a 00:02:54.075 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:54.075 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:54.334 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:54.334 [193/268] Linking static target lib/librte_security.a 00:02:54.592 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.592 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:54.592 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.849 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.107 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:55.107 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:55.365 [200/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.365 [201/268] Compiling C 
object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:55.365 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:55.365 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:55.365 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:55.623 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:55.881 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:55.881 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:55.881 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:55.881 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:55.881 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:55.881 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:56.140 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:56.140 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:56.140 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:56.140 [215/268] Linking static target drivers/librte_bus_vdev.a 00:02:56.140 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:56.140 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:56.140 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:56.140 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:56.398 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:56.398 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:56.398 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:56.398 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:56.398 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:56.398 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:56.656 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:56.656 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.593 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.593 [229/268] Linking target lib/librte_eal.so.24.1 00:02:57.593 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:57.593 [231/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:57.593 [232/268] Linking target lib/librte_ring.so.24.1 00:02:57.593 [233/268] Linking target lib/librte_pci.so.24.1 00:02:57.593 [234/268] Linking target lib/librte_meter.so.24.1 00:02:57.851 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:57.851 [236/268] Linking target lib/librte_timer.so.24.1 00:02:57.851 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:57.851 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:57.851 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:57.851 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:57.851 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:57.851 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:57.851 [243/268] Linking target lib/librte_rcu.so.24.1 00:02:57.851 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:57.851 [245/268] Linking target lib/librte_mempool.so.24.1 
00:02:58.109 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:58.109 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:58.109 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:58.109 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:58.366 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:58.366 [251/268] Linking target lib/librte_cryptodev.so.24.1 00:02:58.366 [252/268] Linking target lib/librte_compressdev.so.24.1 00:02:58.366 [253/268] Linking target lib/librte_reorder.so.24.1 00:02:58.366 [254/268] Linking target lib/librte_net.so.24.1 00:02:58.624 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:58.624 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:58.624 [257/268] Linking target lib/librte_security.so.24.1 00:02:58.624 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:58.624 [259/268] Linking target lib/librte_hash.so.24.1 00:02:58.624 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.882 [261/268] Linking target lib/librte_ethdev.so.24.1 00:02:58.882 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:58.882 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:59.139 [264/268] Linking target lib/librte_power.so.24.1 00:03:01.669 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:01.669 [266/268] Linking static target lib/librte_vhost.a 00:03:03.588 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.588 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:03.588 INFO: autodetecting backend as ninja 00:03:03.588 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:04.523 CC lib/ut_mock/mock.o 00:03:04.523 CC lib/ut/ut.o 00:03:04.523 CC lib/log/log_flags.o 00:03:04.523 CC lib/log/log_deprecated.o 00:03:04.523 CC lib/log/log.o 00:03:04.782 LIB libspdk_ut.a 00:03:04.782 LIB libspdk_ut_mock.a 00:03:04.782 SO libspdk_ut.so.2.0 00:03:04.782 LIB libspdk_log.a 00:03:04.782 SO libspdk_ut_mock.so.6.0 00:03:04.782 SO libspdk_log.so.7.0 00:03:04.782 SYMLINK libspdk_ut.so 00:03:04.782 SYMLINK libspdk_ut_mock.so 00:03:05.040 SYMLINK libspdk_log.so 00:03:05.040 CC lib/ioat/ioat.o 00:03:05.040 CXX lib/trace_parser/trace.o 00:03:05.040 CC lib/util/bit_array.o 00:03:05.040 CC lib/dma/dma.o 00:03:05.040 CC lib/util/base64.o 00:03:05.040 CC lib/util/cpuset.o 00:03:05.040 CC lib/util/crc32.o 00:03:05.040 CC lib/util/crc16.o 00:03:05.040 CC lib/util/crc32c.o 00:03:05.298 CC lib/vfio_user/host/vfio_user_pci.o 00:03:05.298 CC lib/util/crc32_ieee.o 00:03:05.298 CC lib/util/crc64.o 00:03:05.298 LIB libspdk_dma.a 00:03:05.298 CC lib/util/dif.o 00:03:05.298 CC lib/util/fd.o 00:03:05.298 SO libspdk_dma.so.4.0 00:03:05.298 CC lib/util/fd_group.o 00:03:05.556 CC lib/util/file.o 00:03:05.556 CC lib/util/hexlify.o 00:03:05.556 SYMLINK libspdk_dma.so 00:03:05.556 CC lib/util/iov.o 00:03:05.556 LIB libspdk_ioat.a 00:03:05.556 CC lib/util/math.o 00:03:05.556 CC lib/util/net.o 00:03:05.556 SO libspdk_ioat.so.7.0 00:03:05.556 SYMLINK libspdk_ioat.so 00:03:05.556 CC lib/vfio_user/host/vfio_user.o 00:03:05.556 CC lib/util/pipe.o 00:03:05.556 CC lib/util/strerror_tls.o 00:03:05.814 CC lib/util/string.o 00:03:05.814 CC lib/util/uuid.o 00:03:05.814 CC lib/util/xor.o 00:03:05.814 CC lib/util/zipf.o 00:03:05.814 LIB libspdk_vfio_user.a 00:03:05.814 SO libspdk_vfio_user.so.5.0 00:03:06.071 SYMLINK libspdk_vfio_user.so 00:03:06.071 LIB libspdk_util.a 00:03:06.329 SO libspdk_util.so.10.0 00:03:06.329 LIB libspdk_trace_parser.a 00:03:06.329 SO libspdk_trace_parser.so.5.0 00:03:06.329 SYMLINK libspdk_util.so 
00:03:06.614 SYMLINK libspdk_trace_parser.so 00:03:06.614 CC lib/rdma_utils/rdma_utils.o 00:03:06.614 CC lib/env_dpdk/env.o 00:03:06.614 CC lib/json/json_parse.o 00:03:06.614 CC lib/env_dpdk/memory.o 00:03:06.614 CC lib/env_dpdk/pci.o 00:03:06.614 CC lib/vmd/vmd.o 00:03:06.614 CC lib/vmd/led.o 00:03:06.614 CC lib/rdma_provider/common.o 00:03:06.614 CC lib/conf/conf.o 00:03:06.614 CC lib/idxd/idxd.o 00:03:06.872 CC lib/idxd/idxd_user.o 00:03:06.872 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:06.872 LIB libspdk_conf.a 00:03:06.872 CC lib/json/json_util.o 00:03:06.872 SO libspdk_conf.so.6.0 00:03:06.872 LIB libspdk_rdma_utils.a 00:03:06.872 SO libspdk_rdma_utils.so.1.0 00:03:07.129 SYMLINK libspdk_conf.so 00:03:07.129 CC lib/env_dpdk/init.o 00:03:07.129 LIB libspdk_rdma_provider.a 00:03:07.129 SYMLINK libspdk_rdma_utils.so 00:03:07.129 CC lib/idxd/idxd_kernel.o 00:03:07.129 CC lib/json/json_write.o 00:03:07.129 CC lib/env_dpdk/threads.o 00:03:07.129 SO libspdk_rdma_provider.so.6.0 00:03:07.129 SYMLINK libspdk_rdma_provider.so 00:03:07.129 CC lib/env_dpdk/pci_ioat.o 00:03:07.129 CC lib/env_dpdk/pci_virtio.o 00:03:07.129 CC lib/env_dpdk/pci_vmd.o 00:03:07.129 CC lib/env_dpdk/pci_idxd.o 00:03:07.386 CC lib/env_dpdk/pci_event.o 00:03:07.386 CC lib/env_dpdk/sigbus_handler.o 00:03:07.386 CC lib/env_dpdk/pci_dpdk.o 00:03:07.386 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:07.386 LIB libspdk_idxd.a 00:03:07.386 LIB libspdk_json.a 00:03:07.386 SO libspdk_idxd.so.12.0 00:03:07.386 SO libspdk_json.so.6.0 00:03:07.386 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:07.386 SYMLINK libspdk_idxd.so 00:03:07.645 SYMLINK libspdk_json.so 00:03:07.645 LIB libspdk_vmd.a 00:03:07.645 SO libspdk_vmd.so.6.0 00:03:07.645 SYMLINK libspdk_vmd.so 00:03:07.645 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:07.645 CC lib/jsonrpc/jsonrpc_server.o 00:03:07.645 CC lib/jsonrpc/jsonrpc_client.o 00:03:07.645 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:08.211 LIB libspdk_jsonrpc.a 00:03:08.211 SO libspdk_jsonrpc.so.6.0 
00:03:08.211 SYMLINK libspdk_jsonrpc.so 00:03:08.469 CC lib/rpc/rpc.o 00:03:08.469 LIB libspdk_env_dpdk.a 00:03:08.733 SO libspdk_env_dpdk.so.15.0 00:03:08.733 LIB libspdk_rpc.a 00:03:08.733 SO libspdk_rpc.so.6.0 00:03:08.733 SYMLINK libspdk_env_dpdk.so 00:03:08.733 SYMLINK libspdk_rpc.so 00:03:08.996 CC lib/keyring/keyring.o 00:03:08.996 CC lib/keyring/keyring_rpc.o 00:03:08.996 CC lib/notify/notify_rpc.o 00:03:08.996 CC lib/notify/notify.o 00:03:08.996 CC lib/trace/trace.o 00:03:08.996 CC lib/trace/trace_flags.o 00:03:08.996 CC lib/trace/trace_rpc.o 00:03:09.253 LIB libspdk_notify.a 00:03:09.253 SO libspdk_notify.so.6.0 00:03:09.511 SYMLINK libspdk_notify.so 00:03:09.511 LIB libspdk_trace.a 00:03:09.511 LIB libspdk_keyring.a 00:03:09.511 SO libspdk_trace.so.10.0 00:03:09.511 SO libspdk_keyring.so.1.0 00:03:09.511 SYMLINK libspdk_trace.so 00:03:09.511 SYMLINK libspdk_keyring.so 00:03:09.769 CC lib/thread/thread.o 00:03:09.769 CC lib/sock/sock.o 00:03:09.769 CC lib/thread/iobuf.o 00:03:09.769 CC lib/sock/sock_rpc.o 00:03:10.336 LIB libspdk_sock.a 00:03:10.336 SO libspdk_sock.so.10.0 00:03:10.336 SYMLINK libspdk_sock.so 00:03:10.595 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:10.595 CC lib/nvme/nvme_fabric.o 00:03:10.595 CC lib/nvme/nvme_ns_cmd.o 00:03:10.595 CC lib/nvme/nvme_ctrlr.o 00:03:10.595 CC lib/nvme/nvme_ns.o 00:03:10.595 CC lib/nvme/nvme_qpair.o 00:03:10.595 CC lib/nvme/nvme_pcie_common.o 00:03:10.595 CC lib/nvme/nvme_pcie.o 00:03:10.595 CC lib/nvme/nvme.o 00:03:11.529 CC lib/nvme/nvme_quirks.o 00:03:11.529 CC lib/nvme/nvme_transport.o 00:03:11.529 CC lib/nvme/nvme_discovery.o 00:03:11.529 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:11.788 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:11.788 CC lib/nvme/nvme_tcp.o 00:03:11.788 CC lib/nvme/nvme_opal.o 00:03:11.788 LIB libspdk_thread.a 00:03:11.788 SO libspdk_thread.so.10.1 00:03:12.050 CC lib/nvme/nvme_io_msg.o 00:03:12.050 SYMLINK libspdk_thread.so 00:03:12.050 CC lib/nvme/nvme_poll_group.o 00:03:12.050 CC lib/nvme/nvme_zns.o 
00:03:12.308 CC lib/nvme/nvme_stubs.o 00:03:12.308 CC lib/accel/accel.o 00:03:12.308 CC lib/accel/accel_rpc.o 00:03:12.308 CC lib/accel/accel_sw.o 00:03:12.566 CC lib/nvme/nvme_auth.o 00:03:12.566 CC lib/nvme/nvme_cuse.o 00:03:12.566 CC lib/nvme/nvme_rdma.o 00:03:12.825 CC lib/init/subsystem.o 00:03:12.825 CC lib/init/json_config.o 00:03:12.825 CC lib/blob/blobstore.o 00:03:12.825 CC lib/virtio/virtio.o 00:03:13.083 CC lib/init/subsystem_rpc.o 00:03:13.083 CC lib/blob/request.o 00:03:13.341 CC lib/init/rpc.o 00:03:13.341 CC lib/virtio/virtio_vhost_user.o 00:03:13.599 LIB libspdk_init.a 00:03:13.599 CC lib/virtio/virtio_vfio_user.o 00:03:13.599 SO libspdk_init.so.5.0 00:03:13.599 LIB libspdk_accel.a 00:03:13.599 CC lib/blob/zeroes.o 00:03:13.599 SYMLINK libspdk_init.so 00:03:13.599 CC lib/blob/blob_bs_dev.o 00:03:13.599 CC lib/virtio/virtio_pci.o 00:03:13.599 SO libspdk_accel.so.16.0 00:03:13.858 SYMLINK libspdk_accel.so 00:03:13.858 CC lib/event/app.o 00:03:13.858 CC lib/event/reactor.o 00:03:13.858 CC lib/event/log_rpc.o 00:03:13.858 CC lib/event/app_rpc.o 00:03:13.858 CC lib/event/scheduler_static.o 00:03:14.115 CC lib/bdev/bdev.o 00:03:14.115 CC lib/bdev/bdev_rpc.o 00:03:14.115 LIB libspdk_virtio.a 00:03:14.115 CC lib/bdev/bdev_zone.o 00:03:14.115 SO libspdk_virtio.so.7.0 00:03:14.374 SYMLINK libspdk_virtio.so 00:03:14.374 CC lib/bdev/part.o 00:03:14.374 CC lib/bdev/scsi_nvme.o 00:03:14.374 LIB libspdk_nvme.a 00:03:14.632 LIB libspdk_event.a 00:03:14.632 SO libspdk_nvme.so.13.1 00:03:14.632 SO libspdk_event.so.14.0 00:03:14.632 SYMLINK libspdk_event.so 00:03:14.889 SYMLINK libspdk_nvme.so 00:03:17.440 LIB libspdk_blob.a 00:03:17.440 SO libspdk_blob.so.11.0 00:03:17.440 SYMLINK libspdk_blob.so 00:03:17.440 LIB libspdk_bdev.a 00:03:17.698 SO libspdk_bdev.so.16.0 00:03:17.698 CC lib/lvol/lvol.o 00:03:17.698 CC lib/blobfs/blobfs.o 00:03:17.698 CC lib/blobfs/tree.o 00:03:17.698 SYMLINK libspdk_bdev.so 00:03:17.956 CC lib/ftl/ftl_core.o 00:03:17.956 CC 
lib/ftl/ftl_init.o 00:03:17.956 CC lib/ftl/ftl_layout.o 00:03:17.956 CC lib/ftl/ftl_debug.o 00:03:17.956 CC lib/ublk/ublk.o 00:03:17.956 CC lib/scsi/dev.o 00:03:17.956 CC lib/nbd/nbd.o 00:03:17.956 CC lib/nvmf/ctrlr.o 00:03:18.214 CC lib/ftl/ftl_io.o 00:03:18.214 CC lib/scsi/lun.o 00:03:18.214 CC lib/ftl/ftl_sb.o 00:03:18.214 CC lib/ftl/ftl_l2p.o 00:03:18.473 CC lib/ublk/ublk_rpc.o 00:03:18.473 CC lib/scsi/port.o 00:03:18.473 CC lib/scsi/scsi.o 00:03:18.473 CC lib/ftl/ftl_l2p_flat.o 00:03:18.473 CC lib/nbd/nbd_rpc.o 00:03:18.473 CC lib/ftl/ftl_nv_cache.o 00:03:18.473 CC lib/nvmf/ctrlr_discovery.o 00:03:18.731 CC lib/scsi/scsi_bdev.o 00:03:18.731 CC lib/nvmf/ctrlr_bdev.o 00:03:18.731 LIB libspdk_nbd.a 00:03:18.731 LIB libspdk_ublk.a 00:03:18.731 CC lib/nvmf/subsystem.o 00:03:18.731 SO libspdk_nbd.so.7.0 00:03:18.731 SO libspdk_ublk.so.3.0 00:03:18.731 SYMLINK libspdk_nbd.so 00:03:18.731 CC lib/nvmf/nvmf.o 00:03:18.989 LIB libspdk_blobfs.a 00:03:18.989 SYMLINK libspdk_ublk.so 00:03:18.989 CC lib/ftl/ftl_band.o 00:03:18.989 LIB libspdk_lvol.a 00:03:18.989 SO libspdk_blobfs.so.10.0 00:03:18.989 SO libspdk_lvol.so.10.0 00:03:18.989 SYMLINK libspdk_blobfs.so 00:03:18.989 CC lib/ftl/ftl_band_ops.o 00:03:18.989 SYMLINK libspdk_lvol.so 00:03:18.989 CC lib/ftl/ftl_writer.o 00:03:19.261 CC lib/scsi/scsi_pr.o 00:03:19.261 CC lib/nvmf/nvmf_rpc.o 00:03:19.261 CC lib/ftl/ftl_rq.o 00:03:19.261 CC lib/scsi/scsi_rpc.o 00:03:19.531 CC lib/ftl/ftl_reloc.o 00:03:19.531 CC lib/ftl/ftl_l2p_cache.o 00:03:19.531 CC lib/ftl/ftl_p2l.o 00:03:19.531 CC lib/ftl/mngt/ftl_mngt.o 00:03:19.531 CC lib/scsi/task.o 00:03:19.788 LIB libspdk_scsi.a 00:03:19.789 CC lib/nvmf/transport.o 00:03:20.047 SO libspdk_scsi.so.9.0 00:03:20.047 CC lib/nvmf/tcp.o 00:03:20.047 CC lib/nvmf/stubs.o 00:03:20.047 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:20.047 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:20.047 SYMLINK libspdk_scsi.so 00:03:20.305 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:20.305 CC lib/iscsi/conn.o 00:03:20.305 
CC lib/iscsi/init_grp.o 00:03:20.305 CC lib/iscsi/iscsi.o 00:03:20.305 CC lib/nvmf/mdns_server.o 00:03:20.305 CC lib/nvmf/rdma.o 00:03:20.305 CC lib/vhost/vhost.o 00:03:20.305 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:20.564 CC lib/nvmf/auth.o 00:03:20.564 CC lib/iscsi/md5.o 00:03:20.821 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:20.821 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:20.821 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:20.821 CC lib/vhost/vhost_rpc.o 00:03:21.079 CC lib/iscsi/param.o 00:03:21.079 CC lib/iscsi/portal_grp.o 00:03:21.079 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:21.079 CC lib/vhost/vhost_scsi.o 00:03:21.079 CC lib/iscsi/tgt_node.o 00:03:21.337 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:21.337 CC lib/iscsi/iscsi_subsystem.o 00:03:21.337 CC lib/iscsi/iscsi_rpc.o 00:03:21.638 CC lib/iscsi/task.o 00:03:21.638 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:21.638 CC lib/vhost/vhost_blk.o 00:03:21.638 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:21.638 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:21.896 CC lib/ftl/utils/ftl_conf.o 00:03:21.896 CC lib/vhost/rte_vhost_user.o 00:03:21.896 CC lib/ftl/utils/ftl_md.o 00:03:21.896 CC lib/ftl/utils/ftl_mempool.o 00:03:21.896 CC lib/ftl/utils/ftl_bitmap.o 00:03:22.153 LIB libspdk_iscsi.a 00:03:22.153 CC lib/ftl/utils/ftl_property.o 00:03:22.153 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:22.153 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:22.153 SO libspdk_iscsi.so.8.0 00:03:22.153 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:22.153 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:22.410 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:22.410 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:22.410 SYMLINK libspdk_iscsi.so 00:03:22.410 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:22.410 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:22.410 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:22.410 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:22.668 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:22.668 CC lib/ftl/base/ftl_base_dev.o 00:03:22.668 CC lib/ftl/base/ftl_base_bdev.o 00:03:22.668 CC 
lib/ftl/ftl_trace.o 00:03:22.927 LIB libspdk_ftl.a 00:03:23.186 LIB libspdk_vhost.a 00:03:23.186 SO libspdk_ftl.so.9.0 00:03:23.186 SO libspdk_vhost.so.8.0 00:03:23.444 LIB libspdk_nvmf.a 00:03:23.444 SYMLINK libspdk_vhost.so 00:03:23.444 SO libspdk_nvmf.so.19.0 00:03:23.702 SYMLINK libspdk_ftl.so 00:03:23.702 SYMLINK libspdk_nvmf.so 00:03:24.293 CC module/env_dpdk/env_dpdk_rpc.o 00:03:24.293 CC module/blob/bdev/blob_bdev.o 00:03:24.293 CC module/accel/error/accel_error.o 00:03:24.293 CC module/accel/dsa/accel_dsa.o 00:03:24.293 CC module/keyring/file/keyring.o 00:03:24.293 CC module/sock/posix/posix.o 00:03:24.293 CC module/accel/iaa/accel_iaa.o 00:03:24.293 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:24.293 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:24.293 CC module/accel/ioat/accel_ioat.o 00:03:24.293 LIB libspdk_env_dpdk_rpc.a 00:03:24.293 SO libspdk_env_dpdk_rpc.so.6.0 00:03:24.293 SYMLINK libspdk_env_dpdk_rpc.so 00:03:24.293 CC module/accel/iaa/accel_iaa_rpc.o 00:03:24.558 CC module/accel/error/accel_error_rpc.o 00:03:24.558 CC module/keyring/file/keyring_rpc.o 00:03:24.558 LIB libspdk_scheduler_dpdk_governor.a 00:03:24.558 CC module/accel/ioat/accel_ioat_rpc.o 00:03:24.558 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:24.558 LIB libspdk_scheduler_dynamic.a 00:03:24.558 CC module/accel/dsa/accel_dsa_rpc.o 00:03:24.558 SO libspdk_scheduler_dynamic.so.4.0 00:03:24.558 LIB libspdk_accel_iaa.a 00:03:24.558 LIB libspdk_blob_bdev.a 00:03:24.558 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:24.558 SO libspdk_accel_iaa.so.3.0 00:03:24.558 SO libspdk_blob_bdev.so.11.0 00:03:24.558 SYMLINK libspdk_scheduler_dynamic.so 00:03:24.558 LIB libspdk_accel_error.a 00:03:24.558 LIB libspdk_keyring_file.a 00:03:24.558 SO libspdk_accel_error.so.2.0 00:03:24.558 LIB libspdk_accel_ioat.a 00:03:24.558 SYMLINK libspdk_blob_bdev.so 00:03:24.558 SO libspdk_keyring_file.so.1.0 00:03:24.558 SYMLINK libspdk_accel_iaa.so 00:03:24.558 SO libspdk_accel_ioat.so.6.0 
00:03:24.817 LIB libspdk_accel_dsa.a 00:03:24.817 SYMLINK libspdk_accel_error.so 00:03:24.817 CC module/scheduler/gscheduler/gscheduler.o 00:03:24.817 SYMLINK libspdk_keyring_file.so 00:03:24.817 SO libspdk_accel_dsa.so.5.0 00:03:24.817 SYMLINK libspdk_accel_ioat.so 00:03:24.817 SYMLINK libspdk_accel_dsa.so 00:03:24.817 CC module/keyring/linux/keyring.o 00:03:24.817 CC module/keyring/linux/keyring_rpc.o 00:03:24.817 LIB libspdk_scheduler_gscheduler.a 00:03:24.817 SO libspdk_scheduler_gscheduler.so.4.0 00:03:25.075 CC module/bdev/error/vbdev_error.o 00:03:25.075 CC module/bdev/malloc/bdev_malloc.o 00:03:25.075 CC module/bdev/gpt/gpt.o 00:03:25.075 CC module/bdev/delay/vbdev_delay.o 00:03:25.075 CC module/blobfs/bdev/blobfs_bdev.o 00:03:25.075 CC module/bdev/lvol/vbdev_lvol.o 00:03:25.075 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:25.075 LIB libspdk_keyring_linux.a 00:03:25.075 SYMLINK libspdk_scheduler_gscheduler.so 00:03:25.075 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:25.075 SO libspdk_keyring_linux.so.1.0 00:03:25.075 SYMLINK libspdk_keyring_linux.so 00:03:25.075 CC module/bdev/gpt/vbdev_gpt.o 00:03:25.075 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:25.075 LIB libspdk_sock_posix.a 00:03:25.075 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:25.333 SO libspdk_sock_posix.so.6.0 00:03:25.333 CC module/bdev/error/vbdev_error_rpc.o 00:03:25.333 SYMLINK libspdk_sock_posix.so 00:03:25.333 LIB libspdk_blobfs_bdev.a 00:03:25.333 LIB libspdk_bdev_delay.a 00:03:25.333 CC module/bdev/null/bdev_null.o 00:03:25.333 SO libspdk_blobfs_bdev.so.6.0 00:03:25.333 LIB libspdk_bdev_malloc.a 00:03:25.333 LIB libspdk_bdev_gpt.a 00:03:25.333 SO libspdk_bdev_delay.so.6.0 00:03:25.591 LIB libspdk_bdev_error.a 00:03:25.591 SO libspdk_bdev_malloc.so.6.0 00:03:25.591 SO libspdk_bdev_gpt.so.6.0 00:03:25.591 CC module/bdev/nvme/bdev_nvme.o 00:03:25.591 SYMLINK libspdk_blobfs_bdev.so 00:03:25.591 SYMLINK libspdk_bdev_delay.so 00:03:25.591 SO libspdk_bdev_error.so.6.0 00:03:25.591 CC 
module/bdev/passthru/vbdev_passthru.o 00:03:25.591 SYMLINK libspdk_bdev_malloc.so 00:03:25.591 SYMLINK libspdk_bdev_gpt.so 00:03:25.591 CC module/bdev/null/bdev_null_rpc.o 00:03:25.591 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:25.591 LIB libspdk_bdev_lvol.a 00:03:25.591 SYMLINK libspdk_bdev_error.so 00:03:25.591 CC module/bdev/raid/bdev_raid.o 00:03:25.591 SO libspdk_bdev_lvol.so.6.0 00:03:25.591 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:25.591 CC module/bdev/split/vbdev_split.o 00:03:25.849 CC module/bdev/split/vbdev_split_rpc.o 00:03:25.850 SYMLINK libspdk_bdev_lvol.so 00:03:25.850 LIB libspdk_bdev_null.a 00:03:25.850 CC module/bdev/aio/bdev_aio.o 00:03:25.850 SO libspdk_bdev_null.so.6.0 00:03:25.850 SYMLINK libspdk_bdev_null.so 00:03:25.850 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:25.850 CC module/bdev/ftl/bdev_ftl.o 00:03:25.850 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:25.850 CC module/bdev/raid/bdev_raid_rpc.o 00:03:26.109 LIB libspdk_bdev_split.a 00:03:26.109 SO libspdk_bdev_split.so.6.0 00:03:26.109 LIB libspdk_bdev_zone_block.a 00:03:26.109 LIB libspdk_bdev_passthru.a 00:03:26.109 SO libspdk_bdev_zone_block.so.6.0 00:03:26.109 SO libspdk_bdev_passthru.so.6.0 00:03:26.109 SYMLINK libspdk_bdev_split.so 00:03:26.109 CC module/bdev/aio/bdev_aio_rpc.o 00:03:26.109 SYMLINK libspdk_bdev_zone_block.so 00:03:26.109 CC module/bdev/nvme/nvme_rpc.o 00:03:26.109 SYMLINK libspdk_bdev_passthru.so 00:03:26.109 CC module/bdev/nvme/bdev_mdns_client.o 00:03:26.367 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:26.367 CC module/bdev/iscsi/bdev_iscsi.o 00:03:26.367 LIB libspdk_bdev_aio.a 00:03:26.367 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:26.367 SO libspdk_bdev_aio.so.6.0 00:03:26.367 CC module/bdev/rbd/bdev_rbd.o 00:03:26.367 CC module/bdev/raid/bdev_raid_sb.o 00:03:26.367 CC module/bdev/raid/raid0.o 00:03:26.367 SYMLINK libspdk_bdev_aio.so 00:03:26.367 CC module/bdev/raid/raid1.o 00:03:26.367 LIB libspdk_bdev_ftl.a 00:03:26.367 CC 
module/bdev/raid/concat.o 00:03:26.625 SO libspdk_bdev_ftl.so.6.0 00:03:26.625 SYMLINK libspdk_bdev_ftl.so 00:03:26.625 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:26.625 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:26.625 CC module/bdev/rbd/bdev_rbd_rpc.o 00:03:26.625 CC module/bdev/nvme/vbdev_opal.o 00:03:26.625 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:26.883 LIB libspdk_bdev_iscsi.a 00:03:26.883 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:26.883 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:26.883 SO libspdk_bdev_iscsi.so.6.0 00:03:26.883 LIB libspdk_bdev_raid.a 00:03:26.883 LIB libspdk_bdev_rbd.a 00:03:26.883 SYMLINK libspdk_bdev_iscsi.so 00:03:26.883 SO libspdk_bdev_raid.so.6.0 00:03:26.883 SO libspdk_bdev_rbd.so.7.0 00:03:27.239 SYMLINK libspdk_bdev_rbd.so 00:03:27.239 LIB libspdk_bdev_virtio.a 00:03:27.239 SYMLINK libspdk_bdev_raid.so 00:03:27.239 SO libspdk_bdev_virtio.so.6.0 00:03:27.239 SYMLINK libspdk_bdev_virtio.so 00:03:28.633 LIB libspdk_bdev_nvme.a 00:03:28.633 SO libspdk_bdev_nvme.so.7.0 00:03:28.633 SYMLINK libspdk_bdev_nvme.so 00:03:29.198 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:29.198 CC module/event/subsystems/keyring/keyring.o 00:03:29.198 CC module/event/subsystems/sock/sock.o 00:03:29.198 CC module/event/subsystems/scheduler/scheduler.o 00:03:29.198 CC module/event/subsystems/vmd/vmd.o 00:03:29.198 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:29.198 CC module/event/subsystems/iobuf/iobuf.o 00:03:29.198 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:29.456 LIB libspdk_event_keyring.a 00:03:29.456 LIB libspdk_event_scheduler.a 00:03:29.456 LIB libspdk_event_vhost_blk.a 00:03:29.456 SO libspdk_event_keyring.so.1.0 00:03:29.456 SO libspdk_event_scheduler.so.4.0 00:03:29.456 LIB libspdk_event_sock.a 00:03:29.456 SO libspdk_event_vhost_blk.so.3.0 00:03:29.456 LIB libspdk_event_iobuf.a 00:03:29.456 LIB libspdk_event_vmd.a 00:03:29.456 SO libspdk_event_sock.so.5.0 00:03:29.456 SO libspdk_event_iobuf.so.3.0 00:03:29.456 SO 
libspdk_event_vmd.so.6.0 00:03:29.456 SYMLINK libspdk_event_keyring.so 00:03:29.456 SYMLINK libspdk_event_scheduler.so 00:03:29.456 SYMLINK libspdk_event_vhost_blk.so 00:03:29.456 SYMLINK libspdk_event_sock.so 00:03:29.456 SYMLINK libspdk_event_vmd.so 00:03:29.456 SYMLINK libspdk_event_iobuf.so 00:03:29.714 CC module/event/subsystems/accel/accel.o 00:03:29.971 LIB libspdk_event_accel.a 00:03:29.971 SO libspdk_event_accel.so.6.0 00:03:29.971 SYMLINK libspdk_event_accel.so 00:03:30.230 CC module/event/subsystems/bdev/bdev.o 00:03:30.488 LIB libspdk_event_bdev.a 00:03:30.488 SO libspdk_event_bdev.so.6.0 00:03:30.746 SYMLINK libspdk_event_bdev.so 00:03:30.746 CC module/event/subsystems/scsi/scsi.o 00:03:30.746 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:30.746 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:30.746 CC module/event/subsystems/ublk/ublk.o 00:03:30.746 CC module/event/subsystems/nbd/nbd.o 00:03:31.004 LIB libspdk_event_ublk.a 00:03:31.004 LIB libspdk_event_nbd.a 00:03:31.004 LIB libspdk_event_scsi.a 00:03:31.004 SO libspdk_event_nbd.so.6.0 00:03:31.004 SO libspdk_event_ublk.so.3.0 00:03:31.004 SO libspdk_event_scsi.so.6.0 00:03:31.261 SYMLINK libspdk_event_nbd.so 00:03:31.261 SYMLINK libspdk_event_ublk.so 00:03:31.261 LIB libspdk_event_nvmf.a 00:03:31.261 SYMLINK libspdk_event_scsi.so 00:03:31.261 SO libspdk_event_nvmf.so.6.0 00:03:31.261 SYMLINK libspdk_event_nvmf.so 00:03:31.519 CC module/event/subsystems/iscsi/iscsi.o 00:03:31.519 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:31.519 LIB libspdk_event_vhost_scsi.a 00:03:31.519 LIB libspdk_event_iscsi.a 00:03:31.519 SO libspdk_event_vhost_scsi.so.3.0 00:03:31.776 SO libspdk_event_iscsi.so.6.0 00:03:31.776 SYMLINK libspdk_event_vhost_scsi.so 00:03:31.776 SYMLINK libspdk_event_iscsi.so 00:03:31.776 SO libspdk.so.6.0 00:03:31.776 SYMLINK libspdk.so 00:03:32.034 TEST_HEADER include/spdk/accel.h 00:03:32.034 TEST_HEADER include/spdk/accel_module.h 00:03:32.034 TEST_HEADER include/spdk/assert.h 
00:03:32.034 TEST_HEADER include/spdk/barrier.h 00:03:32.034 CXX app/trace/trace.o 00:03:32.034 TEST_HEADER include/spdk/base64.h 00:03:32.034 CC app/trace_record/trace_record.o 00:03:32.034 TEST_HEADER include/spdk/bdev.h 00:03:32.034 TEST_HEADER include/spdk/bdev_module.h 00:03:32.034 TEST_HEADER include/spdk/bdev_zone.h 00:03:32.034 TEST_HEADER include/spdk/bit_array.h 00:03:32.034 TEST_HEADER include/spdk/bit_pool.h 00:03:32.034 TEST_HEADER include/spdk/blob_bdev.h 00:03:32.034 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:32.291 TEST_HEADER include/spdk/blobfs.h 00:03:32.291 TEST_HEADER include/spdk/blob.h 00:03:32.291 TEST_HEADER include/spdk/conf.h 00:03:32.291 TEST_HEADER include/spdk/config.h 00:03:32.291 TEST_HEADER include/spdk/cpuset.h 00:03:32.291 TEST_HEADER include/spdk/crc16.h 00:03:32.291 CC app/nvmf_tgt/nvmf_main.o 00:03:32.291 TEST_HEADER include/spdk/crc32.h 00:03:32.291 TEST_HEADER include/spdk/crc64.h 00:03:32.291 TEST_HEADER include/spdk/dif.h 00:03:32.291 TEST_HEADER include/spdk/dma.h 00:03:32.291 CC app/iscsi_tgt/iscsi_tgt.o 00:03:32.291 TEST_HEADER include/spdk/endian.h 00:03:32.291 TEST_HEADER include/spdk/env_dpdk.h 00:03:32.291 TEST_HEADER include/spdk/env.h 00:03:32.291 TEST_HEADER include/spdk/event.h 00:03:32.291 TEST_HEADER include/spdk/fd_group.h 00:03:32.291 TEST_HEADER include/spdk/fd.h 00:03:32.291 TEST_HEADER include/spdk/file.h 00:03:32.291 TEST_HEADER include/spdk/ftl.h 00:03:32.291 TEST_HEADER include/spdk/gpt_spec.h 00:03:32.291 TEST_HEADER include/spdk/hexlify.h 00:03:32.291 TEST_HEADER include/spdk/histogram_data.h 00:03:32.291 TEST_HEADER include/spdk/idxd.h 00:03:32.291 TEST_HEADER include/spdk/idxd_spec.h 00:03:32.291 TEST_HEADER include/spdk/init.h 00:03:32.291 TEST_HEADER include/spdk/ioat.h 00:03:32.291 TEST_HEADER include/spdk/ioat_spec.h 00:03:32.291 TEST_HEADER include/spdk/iscsi_spec.h 00:03:32.291 CC test/thread/poller_perf/poller_perf.o 00:03:32.291 TEST_HEADER include/spdk/json.h 00:03:32.291 TEST_HEADER 
include/spdk/jsonrpc.h 00:03:32.291 TEST_HEADER include/spdk/keyring.h 00:03:32.291 TEST_HEADER include/spdk/keyring_module.h 00:03:32.291 TEST_HEADER include/spdk/likely.h 00:03:32.291 TEST_HEADER include/spdk/log.h 00:03:32.291 TEST_HEADER include/spdk/lvol.h 00:03:32.291 TEST_HEADER include/spdk/memory.h 00:03:32.291 CC examples/util/zipf/zipf.o 00:03:32.291 TEST_HEADER include/spdk/mmio.h 00:03:32.291 TEST_HEADER include/spdk/nbd.h 00:03:32.291 TEST_HEADER include/spdk/net.h 00:03:32.291 TEST_HEADER include/spdk/notify.h 00:03:32.291 TEST_HEADER include/spdk/nvme.h 00:03:32.291 TEST_HEADER include/spdk/nvme_intel.h 00:03:32.291 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:32.291 CC test/app/bdev_svc/bdev_svc.o 00:03:32.291 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:32.291 TEST_HEADER include/spdk/nvme_spec.h 00:03:32.291 CC test/dma/test_dma/test_dma.o 00:03:32.292 TEST_HEADER include/spdk/nvme_zns.h 00:03:32.292 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:32.292 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:32.292 TEST_HEADER include/spdk/nvmf.h 00:03:32.292 TEST_HEADER include/spdk/nvmf_spec.h 00:03:32.292 TEST_HEADER include/spdk/nvmf_transport.h 00:03:32.292 TEST_HEADER include/spdk/opal.h 00:03:32.292 TEST_HEADER include/spdk/opal_spec.h 00:03:32.292 TEST_HEADER include/spdk/pci_ids.h 00:03:32.292 TEST_HEADER include/spdk/pipe.h 00:03:32.292 TEST_HEADER include/spdk/queue.h 00:03:32.292 TEST_HEADER include/spdk/reduce.h 00:03:32.292 TEST_HEADER include/spdk/rpc.h 00:03:32.292 TEST_HEADER include/spdk/scheduler.h 00:03:32.292 TEST_HEADER include/spdk/scsi.h 00:03:32.292 TEST_HEADER include/spdk/scsi_spec.h 00:03:32.292 TEST_HEADER include/spdk/sock.h 00:03:32.292 TEST_HEADER include/spdk/stdinc.h 00:03:32.292 TEST_HEADER include/spdk/string.h 00:03:32.292 TEST_HEADER include/spdk/thread.h 00:03:32.292 TEST_HEADER include/spdk/trace.h 00:03:32.292 TEST_HEADER include/spdk/trace_parser.h 00:03:32.292 TEST_HEADER include/spdk/tree.h 00:03:32.292 TEST_HEADER 
include/spdk/ublk.h 00:03:32.292 TEST_HEADER include/spdk/util.h 00:03:32.292 TEST_HEADER include/spdk/uuid.h 00:03:32.292 TEST_HEADER include/spdk/version.h 00:03:32.292 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:32.292 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:32.292 TEST_HEADER include/spdk/vhost.h 00:03:32.292 TEST_HEADER include/spdk/vmd.h 00:03:32.292 TEST_HEADER include/spdk/xor.h 00:03:32.292 TEST_HEADER include/spdk/zipf.h 00:03:32.292 CXX test/cpp_headers/accel.o 00:03:32.292 CC test/env/mem_callbacks/mem_callbacks.o 00:03:32.549 LINK poller_perf 00:03:32.549 LINK spdk_trace_record 00:03:32.549 LINK iscsi_tgt 00:03:32.549 LINK nvmf_tgt 00:03:32.549 LINK zipf 00:03:32.549 LINK bdev_svc 00:03:32.549 CXX test/cpp_headers/accel_module.o 00:03:32.549 LINK spdk_trace 00:03:32.806 CC test/rpc_client/rpc_client_test.o 00:03:32.806 CXX test/cpp_headers/assert.o 00:03:32.806 LINK test_dma 00:03:32.806 CC test/event/event_perf/event_perf.o 00:03:32.806 CC test/event/reactor/reactor.o 00:03:33.064 LINK rpc_client_test 00:03:33.064 CC examples/ioat/perf/perf.o 00:03:33.064 LINK event_perf 00:03:33.064 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:33.064 CC test/app/histogram_perf/histogram_perf.o 00:03:33.064 LINK mem_callbacks 00:03:33.064 LINK reactor 00:03:33.064 CXX test/cpp_headers/barrier.o 00:03:33.321 CC app/spdk_tgt/spdk_tgt.o 00:03:33.321 LINK histogram_perf 00:03:33.321 CC test/app/jsoncat/jsoncat.o 00:03:33.321 CC test/app/stub/stub.o 00:03:33.321 LINK ioat_perf 00:03:33.321 CXX test/cpp_headers/base64.o 00:03:33.321 CC test/env/vtophys/vtophys.o 00:03:33.321 CC test/event/reactor_perf/reactor_perf.o 00:03:33.321 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:33.321 CXX test/cpp_headers/bdev.o 00:03:33.579 LINK stub 00:03:33.579 LINK jsoncat 00:03:33.579 LINK spdk_tgt 00:03:33.579 LINK reactor_perf 00:03:33.579 LINK vtophys 00:03:33.579 CC examples/ioat/verify/verify.o 00:03:33.579 LINK nvme_fuzz 00:03:33.579 LINK env_dpdk_post_init 
00:03:33.837 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:33.837 CXX test/cpp_headers/bdev_module.o 00:03:33.837 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:33.837 CC test/event/app_repeat/app_repeat.o 00:03:33.837 LINK verify 00:03:33.837 CC test/env/memory/memory_ut.o 00:03:33.837 CC app/spdk_lspci/spdk_lspci.o 00:03:33.837 CC test/event/scheduler/scheduler.o 00:03:33.837 CC app/spdk_nvme_perf/perf.o 00:03:33.837 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:34.095 CC examples/vmd/lsvmd/lsvmd.o 00:03:34.095 LINK app_repeat 00:03:34.095 CXX test/cpp_headers/bdev_zone.o 00:03:34.095 LINK spdk_lspci 00:03:34.095 CC examples/vmd/led/led.o 00:03:34.095 LINK lsvmd 00:03:34.095 LINK scheduler 00:03:34.095 CXX test/cpp_headers/bit_array.o 00:03:34.353 LINK led 00:03:34.353 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:34.353 CC examples/idxd/perf/perf.o 00:03:34.353 CXX test/cpp_headers/bit_pool.o 00:03:34.353 LINK vhost_fuzz 00:03:34.611 CC examples/thread/thread/thread_ex.o 00:03:34.611 CXX test/cpp_headers/blob_bdev.o 00:03:34.611 LINK interrupt_tgt 00:03:34.611 CC test/accel/dif/dif.o 00:03:34.611 CC examples/sock/hello_world/hello_sock.o 00:03:34.869 CXX test/cpp_headers/blobfs_bdev.o 00:03:34.869 LINK idxd_perf 00:03:34.869 LINK thread 00:03:34.869 CC test/blobfs/mkfs/mkfs.o 00:03:35.126 CXX test/cpp_headers/blobfs.o 00:03:35.126 LINK hello_sock 00:03:35.126 CC app/spdk_nvme_identify/identify.o 00:03:35.126 CC app/spdk_nvme_discover/discovery_aer.o 00:03:35.126 LINK spdk_nvme_perf 00:03:35.126 LINK mkfs 00:03:35.126 LINK memory_ut 00:03:35.126 CXX test/cpp_headers/blob.o 00:03:35.384 CC app/spdk_top/spdk_top.o 00:03:35.384 LINK dif 00:03:35.384 CXX test/cpp_headers/conf.o 00:03:35.384 LINK spdk_nvme_discover 00:03:35.384 CXX test/cpp_headers/config.o 00:03:35.384 CC examples/accel/perf/accel_perf.o 00:03:35.710 CC test/env/pci/pci_ut.o 00:03:35.710 CXX test/cpp_headers/cpuset.o 00:03:35.710 CC examples/blob/hello_world/hello_blob.o 00:03:35.710 CC 
examples/blob/cli/blobcli.o 00:03:35.710 CC examples/nvme/hello_world/hello_world.o 00:03:35.710 CXX test/cpp_headers/crc16.o 00:03:35.968 CC examples/nvme/reconnect/reconnect.o 00:03:35.968 CXX test/cpp_headers/crc32.o 00:03:35.968 LINK hello_blob 00:03:35.968 LINK hello_world 00:03:35.968 LINK pci_ut 00:03:35.968 LINK iscsi_fuzz 00:03:36.226 CXX test/cpp_headers/crc64.o 00:03:36.226 LINK accel_perf 00:03:36.226 CXX test/cpp_headers/dif.o 00:03:36.226 LINK spdk_nvme_identify 00:03:36.226 LINK reconnect 00:03:36.226 CXX test/cpp_headers/dma.o 00:03:36.484 LINK blobcli 00:03:36.484 CXX test/cpp_headers/endian.o 00:03:36.484 CC app/vhost/vhost.o 00:03:36.484 LINK spdk_top 00:03:36.484 CXX test/cpp_headers/env_dpdk.o 00:03:36.484 CXX test/cpp_headers/env.o 00:03:36.484 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:36.484 CC examples/nvme/arbitration/arbitration.o 00:03:36.742 CC examples/bdev/hello_world/hello_bdev.o 00:03:36.742 CC test/lvol/esnap/esnap.o 00:03:36.742 CXX test/cpp_headers/event.o 00:03:36.742 CXX test/cpp_headers/fd_group.o 00:03:36.742 LINK vhost 00:03:36.742 CXX test/cpp_headers/fd.o 00:03:36.742 CC examples/bdev/bdevperf/bdevperf.o 00:03:36.742 CC app/spdk_dd/spdk_dd.o 00:03:36.742 CXX test/cpp_headers/file.o 00:03:36.742 CXX test/cpp_headers/ftl.o 00:03:36.999 CXX test/cpp_headers/gpt_spec.o 00:03:36.999 LINK arbitration 00:03:36.999 CXX test/cpp_headers/hexlify.o 00:03:36.999 LINK hello_bdev 00:03:36.999 CXX test/cpp_headers/histogram_data.o 00:03:36.999 CC examples/nvme/hotplug/hotplug.o 00:03:36.999 CXX test/cpp_headers/idxd.o 00:03:36.999 CXX test/cpp_headers/idxd_spec.o 00:03:36.999 LINK nvme_manage 00:03:37.257 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:37.257 CXX test/cpp_headers/init.o 00:03:37.257 LINK spdk_dd 00:03:37.257 CXX test/cpp_headers/ioat.o 00:03:37.257 LINK cmb_copy 00:03:37.257 LINK hotplug 00:03:37.515 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:37.515 CC examples/nvme/abort/abort.o 00:03:37.515 CC 
app/fio/nvme/fio_plugin.o 00:03:37.515 CXX test/cpp_headers/ioat_spec.o 00:03:37.515 CXX test/cpp_headers/iscsi_spec.o 00:03:37.515 CC app/fio/bdev/fio_plugin.o 00:03:37.515 CXX test/cpp_headers/json.o 00:03:37.515 LINK pmr_persistence 00:03:37.773 CXX test/cpp_headers/jsonrpc.o 00:03:37.773 CXX test/cpp_headers/keyring.o 00:03:37.773 CC test/nvme/aer/aer.o 00:03:37.773 LINK bdevperf 00:03:37.773 CXX test/cpp_headers/keyring_module.o 00:03:37.773 LINK abort 00:03:37.773 CC test/bdev/bdevio/bdevio.o 00:03:38.031 CC test/nvme/reset/reset.o 00:03:38.031 CXX test/cpp_headers/likely.o 00:03:38.031 CC test/nvme/sgl/sgl.o 00:03:38.031 LINK aer 00:03:38.031 CC test/nvme/e2edp/nvme_dp.o 00:03:38.031 CXX test/cpp_headers/log.o 00:03:38.031 LINK spdk_bdev 00:03:38.312 LINK spdk_nvme 00:03:38.312 LINK reset 00:03:38.312 CC examples/nvmf/nvmf/nvmf.o 00:03:38.312 CXX test/cpp_headers/lvol.o 00:03:38.312 LINK bdevio 00:03:38.312 CC test/nvme/overhead/overhead.o 00:03:38.312 LINK sgl 00:03:38.312 CC test/nvme/err_injection/err_injection.o 00:03:38.312 CC test/nvme/startup/startup.o 00:03:38.570 LINK nvme_dp 00:03:38.570 CXX test/cpp_headers/memory.o 00:03:38.570 CC test/nvme/reserve/reserve.o 00:03:38.570 LINK err_injection 00:03:38.570 CXX test/cpp_headers/mmio.o 00:03:38.570 LINK startup 00:03:38.570 LINK nvmf 00:03:38.570 CC test/nvme/simple_copy/simple_copy.o 00:03:38.570 CXX test/cpp_headers/nbd.o 00:03:38.828 CXX test/cpp_headers/net.o 00:03:38.828 LINK overhead 00:03:38.828 LINK reserve 00:03:38.828 CC test/nvme/connect_stress/connect_stress.o 00:03:38.828 CC test/nvme/boot_partition/boot_partition.o 00:03:38.828 CC test/nvme/compliance/nvme_compliance.o 00:03:38.828 CXX test/cpp_headers/notify.o 00:03:38.828 CXX test/cpp_headers/nvme.o 00:03:38.828 LINK simple_copy 00:03:39.086 CC test/nvme/fused_ordering/fused_ordering.o 00:03:39.086 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:39.086 LINK connect_stress 00:03:39.086 LINK boot_partition 00:03:39.086 CC 
test/nvme/fdp/fdp.o 00:03:39.086 CXX test/cpp_headers/nvme_intel.o 00:03:39.086 CC test/nvme/cuse/cuse.o 00:03:39.086 CXX test/cpp_headers/nvme_ocssd.o 00:03:39.344 LINK fused_ordering 00:03:39.344 LINK doorbell_aers 00:03:39.344 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:39.344 CXX test/cpp_headers/nvme_spec.o 00:03:39.344 LINK nvme_compliance 00:03:39.344 CXX test/cpp_headers/nvme_zns.o 00:03:39.344 CXX test/cpp_headers/nvmf_cmd.o 00:03:39.344 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:39.344 CXX test/cpp_headers/nvmf.o 00:03:39.602 CXX test/cpp_headers/nvmf_spec.o 00:03:39.602 CXX test/cpp_headers/nvmf_transport.o 00:03:39.602 LINK fdp 00:03:39.602 CXX test/cpp_headers/opal.o 00:03:39.602 CXX test/cpp_headers/opal_spec.o 00:03:39.602 CXX test/cpp_headers/pci_ids.o 00:03:39.602 CXX test/cpp_headers/pipe.o 00:03:39.602 CXX test/cpp_headers/queue.o 00:03:39.602 CXX test/cpp_headers/reduce.o 00:03:39.602 CXX test/cpp_headers/rpc.o 00:03:39.602 CXX test/cpp_headers/scheduler.o 00:03:39.860 CXX test/cpp_headers/scsi.o 00:03:39.860 CXX test/cpp_headers/sock.o 00:03:39.860 CXX test/cpp_headers/scsi_spec.o 00:03:39.860 CXX test/cpp_headers/stdinc.o 00:03:39.860 CXX test/cpp_headers/string.o 00:03:39.860 CXX test/cpp_headers/thread.o 00:03:39.860 CXX test/cpp_headers/trace.o 00:03:39.860 CXX test/cpp_headers/trace_parser.o 00:03:39.860 CXX test/cpp_headers/tree.o 00:03:39.860 CXX test/cpp_headers/ublk.o 00:03:40.118 CXX test/cpp_headers/util.o 00:03:40.118 CXX test/cpp_headers/uuid.o 00:03:40.118 CXX test/cpp_headers/version.o 00:03:40.118 CXX test/cpp_headers/vfio_user_pci.o 00:03:40.118 CXX test/cpp_headers/vfio_user_spec.o 00:03:40.118 CXX test/cpp_headers/vhost.o 00:03:40.118 CXX test/cpp_headers/vmd.o 00:03:40.118 CXX test/cpp_headers/xor.o 00:03:40.118 CXX test/cpp_headers/zipf.o 00:03:41.052 LINK cuse 00:03:43.580 LINK esnap 00:03:44.145 00:03:44.145 real 1m16.223s 00:03:44.145 user 7m18.785s 00:03:44.145 sys 1m37.797s 00:03:44.145 17:08:03 make -- 
common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:44.145 17:08:03 make -- common/autotest_common.sh@10 -- $ set +x 00:03:44.145 ************************************ 00:03:44.145 END TEST make 00:03:44.145 ************************************ 00:03:44.145 17:08:03 -- common/autotest_common.sh@1142 -- $ return 0 00:03:44.145 17:08:03 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:44.145 17:08:03 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:44.145 17:08:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:44.145 17:08:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.145 17:08:03 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:44.145 17:08:03 -- pm/common@44 -- $ pid=5184 00:03:44.145 17:08:03 -- pm/common@50 -- $ kill -TERM 5184 00:03:44.145 17:08:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.145 17:08:03 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:44.145 17:08:03 -- pm/common@44 -- $ pid=5186 00:03:44.145 17:08:03 -- pm/common@50 -- $ kill -TERM 5186 00:03:44.404 17:08:03 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:44.404 17:08:03 -- nvmf/common.sh@7 -- # uname -s 00:03:44.404 17:08:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:44.404 17:08:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:44.404 17:08:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:44.404 17:08:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:44.404 17:08:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:44.404 17:08:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:44.404 17:08:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:44.404 17:08:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:44.404 17:08:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:44.404 17:08:03 -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:44.404 17:08:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5c61f564-1952-48f3-b7d3-94aa342140a5 00:03:44.404 17:08:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5c61f564-1952-48f3-b7d3-94aa342140a5 00:03:44.404 17:08:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:44.404 17:08:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:44.404 17:08:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:44.404 17:08:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:44.404 17:08:03 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:44.404 17:08:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:44.404 17:08:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:44.404 17:08:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:44.404 17:08:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.404 17:08:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.404 17:08:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.404 17:08:03 -- paths/export.sh@5 -- # export PATH 00:03:44.404 17:08:03 
-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.404 17:08:03 -- nvmf/common.sh@47 -- # : 0 00:03:44.404 17:08:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:44.404 17:08:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:44.404 17:08:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:44.404 17:08:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:44.404 17:08:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:44.404 17:08:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:44.404 17:08:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:44.404 17:08:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:44.404 17:08:03 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:44.404 17:08:03 -- spdk/autotest.sh@32 -- # uname -s 00:03:44.404 17:08:03 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:44.404 17:08:03 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:44.404 17:08:03 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:44.404 17:08:03 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:44.404 17:08:03 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:44.404 17:08:03 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:44.404 17:08:03 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:44.404 17:08:03 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:44.404 17:08:03 -- spdk/autotest.sh@48 -- # udevadm_pid=52918 00:03:44.404 17:08:03 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:44.404 17:08:03 -- spdk/autotest.sh@53 -- # 
start_monitor_resources 00:03:44.404 17:08:03 -- pm/common@17 -- # local monitor 00:03:44.404 17:08:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.404 17:08:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.404 17:08:03 -- pm/common@25 -- # sleep 1 00:03:44.404 17:08:03 -- pm/common@21 -- # date +%s 00:03:44.404 17:08:03 -- pm/common@21 -- # date +%s 00:03:44.404 17:08:03 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721668083 00:03:44.404 17:08:03 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721668083 00:03:44.404 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721668083_collect-vmstat.pm.log 00:03:44.404 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721668083_collect-cpu-load.pm.log 00:03:45.339 17:08:04 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:45.339 17:08:04 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:45.339 17:08:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:45.339 17:08:04 -- common/autotest_common.sh@10 -- # set +x 00:03:45.339 17:08:04 -- spdk/autotest.sh@59 -- # create_test_list 00:03:45.339 17:08:04 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:45.339 17:08:04 -- common/autotest_common.sh@10 -- # set +x 00:03:45.339 17:08:04 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:45.339 17:08:04 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:45.339 17:08:04 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:45.339 17:08:04 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:45.339 17:08:04 -- spdk/autotest.sh@63 -- # cd 
/home/vagrant/spdk_repo/spdk 00:03:45.339 17:08:04 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:45.339 17:08:04 -- common/autotest_common.sh@1455 -- # uname 00:03:45.339 17:08:04 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:45.339 17:08:04 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:45.339 17:08:04 -- common/autotest_common.sh@1475 -- # uname 00:03:45.339 17:08:04 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:45.339 17:08:04 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:45.597 17:08:04 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:45.597 17:08:04 -- spdk/autotest.sh@72 -- # hash lcov 00:03:45.597 17:08:04 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:45.597 17:08:04 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:45.597 --rc lcov_branch_coverage=1 00:03:45.597 --rc lcov_function_coverage=1 00:03:45.597 --rc genhtml_branch_coverage=1 00:03:45.597 --rc genhtml_function_coverage=1 00:03:45.597 --rc genhtml_legend=1 00:03:45.597 --rc geninfo_all_blocks=1 00:03:45.597 ' 00:03:45.597 17:08:04 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:45.597 --rc lcov_branch_coverage=1 00:03:45.597 --rc lcov_function_coverage=1 00:03:45.597 --rc genhtml_branch_coverage=1 00:03:45.597 --rc genhtml_function_coverage=1 00:03:45.597 --rc genhtml_legend=1 00:03:45.597 --rc geninfo_all_blocks=1 00:03:45.597 ' 00:03:45.597 17:08:04 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:45.597 --rc lcov_branch_coverage=1 00:03:45.597 --rc lcov_function_coverage=1 00:03:45.597 --rc genhtml_branch_coverage=1 00:03:45.597 --rc genhtml_function_coverage=1 00:03:45.597 --rc genhtml_legend=1 00:03:45.597 --rc geninfo_all_blocks=1 00:03:45.597 --no-external' 00:03:45.597 17:08:04 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:45.597 --rc lcov_branch_coverage=1 00:03:45.597 --rc lcov_function_coverage=1 00:03:45.597 --rc genhtml_branch_coverage=1 00:03:45.597 --rc 
genhtml_function_coverage=1 00:03:45.597 --rc genhtml_legend=1 00:03:45.597 --rc geninfo_all_blocks=1 00:03:45.597 --no-external' 00:03:45.597 17:08:04 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:45.597 lcov: LCOV version 1.14 00:03:45.597 17:08:04 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:00.541 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:00.541 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:15.419 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:15.419 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:15.419 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 
00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:15.420 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:15.420 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no 
functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:15.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 
00:04:15.420 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:15.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:17.950 17:08:36 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:17.950 17:08:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:17.950 17:08:36 -- common/autotest_common.sh@10 -- # set +x 00:04:17.950 17:08:36 -- spdk/autotest.sh@91 -- # rm -f 00:04:17.950 17:08:36 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:18.207 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.466 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:18.466 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:18.466 17:08:37 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:18.466 17:08:37 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:18.466 17:08:37 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:18.466 17:08:37 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:18.466 17:08:37 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.466 17:08:37 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:18.466 17:08:37 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:18.466 17:08:37 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:18.466 17:08:37 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.466 17:08:37 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.466 17:08:37 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:18.466 17:08:37 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:18.466 17:08:37 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:18.466 17:08:37 -- common/autotest_common.sh@1665 -- # 
[[ none != none ]] 00:04:18.466 17:08:37 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.466 17:08:37 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:18.466 17:08:37 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:18.466 17:08:37 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:18.466 17:08:37 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.466 17:08:37 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.466 17:08:37 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:18.466 17:08:37 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:18.466 17:08:37 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:18.466 17:08:37 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.466 17:08:37 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:18.466 17:08:37 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.466 17:08:37 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:18.466 17:08:37 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:18.466 17:08:37 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:18.466 17:08:37 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:18.466 No valid GPT data, bailing 00:04:18.466 17:08:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:18.466 17:08:37 -- scripts/common.sh@391 -- # pt= 00:04:18.466 17:08:37 -- scripts/common.sh@392 -- # return 1 00:04:18.466 17:08:37 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:18.466 1+0 records in 00:04:18.466 1+0 records out 00:04:18.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00475999 s, 220 MB/s 00:04:18.466 17:08:37 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.466 17:08:37 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 
00:04:18.466 17:08:37 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:18.466 17:08:37 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:18.466 17:08:37 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:18.466 No valid GPT data, bailing 00:04:18.466 17:08:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:18.466 17:08:37 -- scripts/common.sh@391 -- # pt= 00:04:18.466 17:08:37 -- scripts/common.sh@392 -- # return 1 00:04:18.466 17:08:37 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:18.466 1+0 records in 00:04:18.466 1+0 records out 00:04:18.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490815 s, 214 MB/s 00:04:18.466 17:08:37 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.466 17:08:37 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:18.466 17:08:37 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:18.466 17:08:37 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:18.466 17:08:37 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:18.724 No valid GPT data, bailing 00:04:18.724 17:08:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:18.724 17:08:37 -- scripts/common.sh@391 -- # pt= 00:04:18.724 17:08:37 -- scripts/common.sh@392 -- # return 1 00:04:18.724 17:08:37 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:18.724 1+0 records in 00:04:18.724 1+0 records out 00:04:18.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00522559 s, 201 MB/s 00:04:18.724 17:08:37 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.724 17:08:37 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:18.724 17:08:37 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:18.724 17:08:37 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:18.724 17:08:37 -- scripts/common.sh@387 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:18.724 No valid GPT data, bailing 00:04:18.724 17:08:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:18.724 17:08:37 -- scripts/common.sh@391 -- # pt= 00:04:18.724 17:08:37 -- scripts/common.sh@392 -- # return 1 00:04:18.724 17:08:37 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:18.724 1+0 records in 00:04:18.724 1+0 records out 00:04:18.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497804 s, 211 MB/s 00:04:18.724 17:08:37 -- spdk/autotest.sh@118 -- # sync 00:04:18.724 17:08:37 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:18.724 17:08:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:18.724 17:08:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:20.664 17:08:39 -- spdk/autotest.sh@124 -- # uname -s 00:04:20.664 17:08:39 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:20.664 17:08:39 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:20.664 17:08:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.664 17:08:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.664 17:08:39 -- common/autotest_common.sh@10 -- # set +x 00:04:20.664 ************************************ 00:04:20.664 START TEST setup.sh 00:04:20.664 ************************************ 00:04:20.664 17:08:39 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:20.664 * Looking for test storage... 
00:04:20.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:20.664 17:08:39 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:20.664 17:08:39 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:20.664 17:08:39 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:20.664 17:08:39 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.664 17:08:39 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.664 17:08:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:20.664 ************************************ 00:04:20.664 START TEST acl 00:04:20.664 ************************************ 00:04:20.664 17:08:39 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:20.664 * Looking for test storage... 00:04:20.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:20.922 17:08:39 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:20.922 17:08:39 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:20.922 17:08:39 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:20.922 17:08:39 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:20.922 17:08:39 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:20.922 17:08:39 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:20.922 17:08:39 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:20.922 17:08:39 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:20.922 17:08:39 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:20.922 17:08:39 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:20.922 17:08:39 setup.sh.acl -- common/autotest_common.sh@1673 -- # 
is_block_zoned nvme1n1 00:04:20.922 17:08:39 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:20.922 17:08:39 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:20.922 17:08:39 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:20.922 17:08:39 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:20.922 17:08:39 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:20.922 17:08:39 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:20.922 17:08:39 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:20.922 17:08:39 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:20.922 17:08:39 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:20.923 17:08:39 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:20.923 17:08:39 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:20.923 17:08:39 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:20.923 17:08:39 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:20.923 17:08:39 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:20.923 17:08:39 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:20.923 17:08:39 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:20.923 17:08:39 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:20.923 17:08:39 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:20.923 17:08:39 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:20.923 17:08:39 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.489 17:08:40 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:21.489 17:08:40 setup.sh.acl 
-- setup/acl.sh@16 -- # local dev driver 00:04:21.489 17:08:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.489 17:08:40 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:21.489 17:08:40 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.489 17:08:40 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.426 Hugepages 00:04:22.426 node hugesize free / total 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.426 00:04:22.426 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@22 -- # 
drivers["$dev"]=nvme 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:22.426 17:08:41 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:22.426 17:08:41 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.426 17:08:41 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.426 17:08:41 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:22.426 ************************************ 00:04:22.426 START TEST denied 00:04:22.426 ************************************ 00:04:22.426 17:08:41 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:22.426 17:08:41 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:22.426 17:08:41 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:22.426 17:08:41 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:22.426 17:08:41 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.426 17:08:41 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:23.361 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:23.361 17:08:42 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:23.361 17:08:42 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:23.361 17:08:42 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:23.361 17:08:42 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:23.361 17:08:42 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:23.361 17:08:42 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:23.361 17:08:42 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:23.361 17:08:42 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:23.361 17:08:42 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:23.361 17:08:42 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:23.927 00:04:23.927 real 0m1.438s 00:04:23.927 user 0m0.570s 00:04:23.927 sys 0m0.827s 00:04:23.927 17:08:42 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.927 ************************************ 00:04:23.927 END TEST denied 00:04:23.927 ************************************ 00:04:23.927 17:08:42 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:23.927 17:08:42 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:23.927 17:08:42 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:23.927 17:08:42 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.927 17:08:42 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.927 17:08:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:23.927 ************************************ 00:04:23.927 START TEST allowed 00:04:23.927 ************************************ 00:04:23.927 17:08:42 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:23.927 17:08:42 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:23.927 17:08:42 setup.sh.acl.allowed -- setup/acl.sh@45 
-- # setup output config 00:04:23.927 17:08:42 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:23.927 17:08:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.927 17:08:42 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:24.861 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.861 17:08:43 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:24.861 17:08:43 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:24.861 17:08:43 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:24.861 17:08:43 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:24.861 17:08:43 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:24.861 17:08:43 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:24.861 17:08:43 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:24.861 17:08:43 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:24.861 17:08:43 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.861 17:08:43 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:25.428 00:04:25.428 real 0m1.548s 00:04:25.428 user 0m0.693s 00:04:25.428 sys 0m0.849s 00:04:25.428 17:08:44 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.428 17:08:44 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:25.428 ************************************ 00:04:25.428 END TEST allowed 00:04:25.428 ************************************ 00:04:25.428 17:08:44 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:25.428 00:04:25.428 real 0m4.833s 00:04:25.428 user 0m2.152s 00:04:25.428 sys 0m2.644s 00:04:25.428 
************************************ 00:04:25.428 END TEST acl 00:04:25.428 ************************************ 00:04:25.428 17:08:44 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.428 17:08:44 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:25.688 17:08:44 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:25.688 17:08:44 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:25.688 17:08:44 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.688 17:08:44 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.688 17:08:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:25.688 ************************************ 00:04:25.688 START TEST hugepages 00:04:25.688 ************************************ 00:04:25.688 17:08:44 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:25.688 * Looking for test storage... 
00:04:25.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5860160 kB' 'MemAvailable: 7416340 kB' 'Buffers: 2436 kB' 'Cached: 1770204 kB' 'SwapCached: 0 kB' 'Active: 435132 kB' 'Inactive: 1442068 kB' 'Active(anon): 115048 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442068 kB' 'Unevictable: 1536 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 106196 kB' 'Mapped: 48600 kB' 'Shmem: 10488 kB' 'KReclaimable: 61932 kB' 'Slab: 132916 kB' 'SReclaimable: 61932 kB' 'SUnreclaim: 70984 kB' 'KernelStack: 6300 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 335080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var 
val _ 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.688 17:08:44 setup.sh.hugepages -- setup/common.sh@32 -- # continue [... identical "[[ <field> == Hugepagesize ]] / continue" xtrace repeats for each remaining /proc/meminfo field (Cached through Unaccepted), elided ...] 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.690 17:08:44 setup.sh.hugepages --
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@18 -- # 
global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:25.690 17:08:44 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:25.690 17:08:44 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.690 17:08:44 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.690 17:08:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:25.690 ************************************ 00:04:25.690 START TEST default_setup 00:04:25.690 ************************************ 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@67 -- # local -g nodes_test 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.690 17:08:44 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:26.255 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.518 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.518 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:26.518 
17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7877676 kB' 'MemAvailable: 9433768 kB' 'Buffers: 2436 kB' 'Cached: 1770188 kB' 'SwapCached: 0 kB' 'Active: 452172 kB' 'Inactive: 1442072 kB' 'Active(anon): 132088 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123028 kB' 'Mapped: 48828 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132736 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 70988 kB' 'KernelStack: 6256 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.518 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [... identical "[[ <field> == AnonHugePages ]] / continue" xtrace repeats for the remaining /proc/meminfo fields, elided ...] 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup
-- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.519 17:08:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.519 17:08:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.519 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.520 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.520 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.520 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.520 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.520 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:26.520 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:26.520 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:26.520 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.520 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.520 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:26.520 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:26.520 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:26.520 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:26.520 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.520 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.520 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.520 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.520 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.520 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7877676 kB' 'MemAvailable: 9433768 kB' 'Buffers: 2436 kB' 'Cached: 1770188 kB' 'SwapCached: 0 kB' 'Active: 451952 kB' 'Inactive: 1442072 kB' 'Active(anon): 131868 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122972 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132736 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 70988 kB' 'KernelStack: 6208 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:26.520 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
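[editor's note: the trace around this point is setup/common.sh's get_meminfo helper reading /proc/meminfo line by line with IFS=': ', comparing each field name against the requested key, and echoing its value on a match (or 0 if the scan falls through). A minimal, self-contained sketch of that pattern — function name and the optional file argument are illustrative, not the exact contents of setup/common.sh:]

```shell
# Sketch of the field-scan loop visible in the xtrace above.
# Usage: get_meminfo_sketch <FieldName> [meminfo-file]
get_meminfo_sketch() {
    local get=$1 var val _
    # One [[ ... ]] comparison per meminfo line, exactly as the trace
    # shows; non-matching fields fall through to the next iteration.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"    # numeric value; the trailing "kB" lands in $_
            return 0
        fi
    done < "${2:-/proc/meminfo}"
    echo 0                 # field absent -> 0, mirroring the '# echo 0' lines
}
```

[e.g. `get_meminfo_sketch HugePages_Surp` prints the surplus-hugepage count, which is what hugepages.sh stores as surp=0 below.]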
00:04:26.520 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... xtrace condensed: the setup/common.sh@31/@32 read/compare loop walked every /proc/meminfo field from MemTotal through HugePages_Rsvd; each failed [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and hit 'continue' (00:04:26.520-00:04:26.522) ...]
00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- #
local var val 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7877676 kB' 'MemAvailable: 9433768 kB' 'Buffers: 2436 kB' 'Cached: 1770188 kB' 'SwapCached: 0 kB' 'Active: 451984 kB' 'Inactive: 1442072 kB' 'Active(anon): 131900 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123032 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132732 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 70984 kB' 'KernelStack: 6208 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.522 17:08:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.522 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 17:08:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.784 
17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.784 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:26.785 nr_hugepages=1024 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:26.785 resv_hugepages=0 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.785 surplus_hugepages=0 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.785 anon_hugepages=0 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == 
nr_hugepages + surp + resv )) 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7877676 kB' 'MemAvailable: 9433768 kB' 'Buffers: 2436 kB' 'Cached: 1770188 kB' 'SwapCached: 0 kB' 'Active: 451884 kB' 'Inactive: 1442072 kB' 'Active(anon): 131800 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122936 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 
132732 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 70984 kB' 'KernelStack: 6192 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.785 17:08:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.785 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 
17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.786 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.787 
17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.787 17:08:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv 
)) 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7877676 kB' 'MemUsed: 4364296 kB' 'SwapCached: 0 kB' 'Active: 451508 kB' 'Inactive: 1442072 kB' 'Active(anon): 131424 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1772624 kB' 'Mapped: 48952 kB' 'AnonPages: 122612 kB' 'Shmem: 10464 kB' 'KernelStack: 6208 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61748 kB' 'Slab: 132732 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 70984 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.787 17:08:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.787 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 
17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 
17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.788 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.789 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.789 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.789 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:26.789 17:08:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:26.789 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.789 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.789 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.789 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.789 node0=1024 expecting 1024 00:04:26.789 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # 
echo 'node0=1024 expecting 1024' 00:04:26.789 17:08:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:26.789 00:04:26.789 real 0m0.974s 00:04:26.789 user 0m0.450s 00:04:26.789 sys 0m0.479s 00:04:26.789 17:08:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.789 17:08:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:26.789 ************************************ 00:04:26.789 END TEST default_setup 00:04:26.789 ************************************ 00:04:26.789 17:08:45 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:26.789 17:08:45 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:26.789 17:08:45 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.789 17:08:45 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.789 17:08:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:26.789 ************************************ 00:04:26.789 START TEST per_node_1G_alloc 00:04:26.789 ************************************ 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:26.789 17:08:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.789 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:27.049 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.049 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:27.049 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.049 17:08:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8925056 kB' 'MemAvailable: 10481152 kB' 'Buffers: 2436 kB' 'Cached: 1770188 kB' 'SwapCached: 0 kB' 'Active: 452292 kB' 'Inactive: 1442076 kB' 'Active(anon): 132208 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123064 kB' 'Mapped: 48884 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132864 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71116 kB' 'KernelStack: 6212 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:27.049 
17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.049 17:08:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.049 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 
17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.050 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.051 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.321 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.321 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.322 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.322 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.322 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.322 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:27.322 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:27.322 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.322 17:08:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.322 17:08:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8925056 kB' 'MemAvailable: 10481152 kB' 'Buffers: 2436 kB' 'Cached: 1770188 kB' 'SwapCached: 0 kB' 'Active: 451728 kB' 'Inactive: 1442076 kB' 'Active(anon): 131644 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122800 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132868 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71120 kB' 'KernelStack: 6224 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.322 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... xtrace elided: setup/common.sh@31-32 loops over the remaining /proc/meminfo keys (SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd), comparing each against HugePages_Surp and hitting `continue` on every non-match ...]
00:04:27.324 17:08:46
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.324 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.324 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.324 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.324 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.324 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:27.324 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:27.324 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:27.324 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:27.324 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.324 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.324 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.324 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.324 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.324 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.324 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.324 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.324 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.324 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
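The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo line by line with `IFS=': ' read -r var val _` until the requested key (here HugePages_Surp, then HugePages_Rsvd) matches, then echoing the value. A minimal sketch of that pattern, reconstructed from the xtrace (the function name and file argument are illustrative, not the actual SPDK source):

```shell
# Sketch of a meminfo reader in the style of setup/common.sh's get_meminfo.
# Splits each "Key: value [kB]" line on ': ' and prints the value for the
# requested key; returns non-zero if the key is absent. Illustration only.
get_meminfo_sketch() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            # Matching key found: emit its numeric value (unit dropped into _)
            echo "$val"
            return 0
        fi
        # Non-matching key: fall through to the next line, as in the trace
    done < "$file"
    return 1
}

# Usage (Linux only): print total memory in kB
[[ -r /proc/meminfo ]] && get_meminfo_sketch MemTotal
```

The per-key `continue` lines filling this log are exactly that loop's non-matching iterations, one xtrace record per meminfo field.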
12241972 kB' 'MemFree: 8925056 kB' 'MemAvailable: 10481152 kB' 'Buffers: 2436 kB' 'Cached: 1770188 kB' 'SwapCached: 0 kB' 'Active: 452012 kB' 'Inactive: 1442076 kB' 'Active(anon): 131928 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123040 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132864 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71116 kB' 'KernelStack: 6192 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:27.324 17:08:46
[... xtrace elided: setup/common.sh@31-32 makes a second pass over the same /proc/meminfo keys (MemTotal through ShmemPmdMapped), this time comparing each against HugePages_Rsvd and hitting `continue` on every non-match ...]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.326 17:08:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.326 
17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:27.326 nr_hugepages=512 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:27.326 resv_hugepages=0 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:27.326 surplus_hugepages=0 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:27.326 anon_hugepages=0 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:27.326 
17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8925056 kB' 'MemAvailable: 10481152 kB' 'Buffers: 2436 kB' 'Cached: 1770188 kB' 'SwapCached: 0 kB' 'Active: 452040 kB' 'Inactive: 1442076 kB' 'Active(anon): 131956 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123180 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 
61748 kB' 'Slab: 132860 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71112 kB' 'KernelStack: 6224 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.326 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 
17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 
17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.327 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.328 17:08:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.328 17:08:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == 
nr_hugepages + surp + resv )) 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.328 17:08:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8925056 kB' 'MemUsed: 3316916 kB' 'SwapCached: 0 kB' 'Active: 451840 kB' 'Inactive: 1442072 kB' 'Active(anon): 131756 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1772620 kB' 'Mapped: 48692 kB' 'AnonPages: 122920 kB' 'Shmem: 10464 kB' 'KernelStack: 6176 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61748 kB' 'Slab: 132872 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.328 17:08:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.328 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.329 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.330 17:08:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:27.330 node0=512 expecting 512 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:27.330 00:04:27.330 real 0m0.550s 00:04:27.330 user 0m0.272s 00:04:27.330 sys 0m0.284s 00:04:27.330 17:08:46 
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.330 17:08:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:27.330 ************************************ 00:04:27.330 END TEST per_node_1G_alloc 00:04:27.330 ************************************ 00:04:27.330 17:08:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:27.330 17:08:46 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:27.330 17:08:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.330 17:08:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.330 17:08:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:27.330 ************************************ 00:04:27.330 START TEST even_2G_alloc 00:04:27.330 ************************************ 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local 
_nr_hugepages=1024 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.330 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:27.588 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.588 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:27.588 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:27.852 17:08:46 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7882864 kB' 'MemAvailable: 9438960 kB' 
'Buffers: 2436 kB' 'Cached: 1770188 kB' 'SwapCached: 0 kB' 'Active: 451892 kB' 'Inactive: 1442076 kB' 'Active(anon): 131808 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123200 kB' 'Mapped: 48952 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132896 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71148 kB' 'KernelStack: 6180 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.852 17:08:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.852 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.853 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.853 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.853 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.853 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.853 17:08:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.853 17:08:46 [... identical setup/common.sh@31-32 read/compare/continue iterations elided for the remaining /proc/meminfo keys (Unevictable through HardwareCorrupted), none matching AnonHugePages ...] 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.854 17:08:46
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7882944 kB' 'MemAvailable: 9439044 kB' 'Buffers: 2436 kB' 'Cached: 1770192 kB' 'SwapCached: 0 kB' 'Active: 451872 kB' 'Inactive: 1442080 kB' 'Active(anon): 131788 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123216 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132896 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71148 kB' 'KernelStack: 6164 kB' 
'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.854 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.854 17:08:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.854 17:08:46 [... identical setup/common.sh@31-32 read/compare/continue iterations elided for the remaining /proc/meminfo keys (Buffers through HugePages_Rsvd), none matching HugePages_Surp ...] 00:04:27.855 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7883424 kB' 'MemAvailable: 9439524 kB' 'Buffers: 2436 kB' 'Cached: 1770192 kB' 'SwapCached: 0 kB' 'Active: 451752 kB' 'Inactive: 1442080 kB' 'Active(anon): 131668 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 
'Inactive(file): 1442080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123072 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132880 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71132 kB' 'KernelStack: 6192 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.856 17:08:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.856 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.857 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.858 17:08:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.858 17:08:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:27.858 nr_hugepages=1024 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:27.858 resv_hugepages=0 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:27.858 surplus_hugepages=0 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:27.858 anon_hugepages=0 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.858 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7883424 kB' 'MemAvailable: 9439524 kB' 'Buffers: 2436 kB' 'Cached: 1770192 kB' 'SwapCached: 0 kB' 'Active: 451724 kB' 'Inactive: 1442080 kB' 'Active(anon): 131640 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123064 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132880 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71132 kB' 'KernelStack: 6224 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.859 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:27.860 17:08:46 
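The long run of `continue` lines above is the field scan in setup/common.sh@31-33: every meminfo key is read and skipped until `HugePages_Total` matches, at which point the value (1024) is echoed back to the caller. A minimal sketch of that loop, assuming a simplified helper (`get_meminfo_sketch` is illustrative, not the real SPDK function name):

```shell
#!/usr/bin/env bash
# Sketch of the scan traced above: read meminfo one "key: value" line at a
# time with IFS=': ', hit "continue" for every non-matching key, and echo the
# value once the requested key (HugePages_Total in this run) matches.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # every non-matching key ("MemTotal", "Zswap", "Dirty", ...) is skipped
        [[ $var == "$get" ]] || continue
        echo "$val"   # the unit ("kB"), when present, lands in _ and is dropped
        return 0
    done < "$mem_f"
    return 1
}
```

On the run above this scan returns 1024, which feeds the `(( 1024 == nr_hugepages + surp + resv ))` check at setup/hugepages.sh@110.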
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:27.860 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- 
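The `get_meminfo HugePages_Surp 0` call traced above (setup/common.sh@17-29) shows how a node argument switches the data source: when `/sys/devices/system/node/node0/meminfo` exists it is read instead of `/proc/meminfo`, and the leading `Node <N> ` prefix is stripped from each line so the same `key: value` parser works for both files. A sketch under those assumptions (helper names are illustrative; the prefix strip mirrors the extglob expansion `"${mem[@]#Node +([0-9]) }"` visible in the trace):

```shell
#!/usr/bin/env bash
shopt -s extglob   # required for the +([0-9]) pattern, as in setup/common.sh

# Choose the meminfo source: per-node sysfs file if a node index is given and
# its meminfo exists, otherwise the global /proc/meminfo.
pick_meminfo_source() {
    local node=$1 mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    echo "$mem_f"
}

# Strip the per-node prefix so both sources parse identically:
# "Node 0 MemTotal: 12241972 kB" -> "MemTotal: 12241972 kB"
strip_node_prefix() {
    echo "${1#Node +([0-9]) }"
}
```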
setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7883424 kB' 'MemUsed: 4358548 kB' 'SwapCached: 0 kB' 'Active: 451668 kB' 'Inactive: 1442080 kB' 'Active(anon): 131584 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1772628 kB' 'Mapped: 48692 kB' 'AnonPages: 122972 kB' 'Shmem: 10464 kB' 'KernelStack: 6208 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61748 kB' 'Slab: 132880 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 
17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.861 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.862 17:08:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:27.862 node0=1024 expecting 1024 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:27.862 00:04:27.862 real 0m0.529s 00:04:27.862 user 0m0.277s 00:04:27.862 sys 0m0.286s 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.862 17:08:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:27.862 ************************************ 00:04:27.862 END TEST even_2G_alloc 00:04:27.862 ************************************ 00:04:27.862 17:08:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:27.862 17:08:46 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:27.862 17:08:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.862 17:08:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.862 17:08:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:27.862 
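The even_2G_alloc test closes above with the accounting at setup/hugepages.sh@110-130: the global `HugePages_Total` must equal the requested count plus surplus and reserved pages, and each node's total is echoed as `node0=1024 expecting 1024`. A minimal sketch of that check, assuming a single node and illustrative function/argument names:

```shell
#!/usr/bin/env bash
# Sketch of the final verification: total pages from meminfo must account for
# the requested pages plus surplus and reserved, then the per-node count
# (requested + reserved on node 0 here) is printed against the expectation.
verify_even_alloc_sketch() {
    local total=$1 nr_hugepages=$2 surp=$3 resv=$4
    # mirrors: (( 1024 == nr_hugepages + surp + resv )) at hugepages.sh@110
    (( total == nr_hugepages + surp + resv )) || return 1
    echo "node0=$((nr_hugepages + resv)) expecting $nr_hugepages"
}
```

With the values from this run (1024 pages, zero surplus and reserved) the sketch reproduces the `node0=1024 expecting 1024` line seen in the log.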
************************************ 00:04:27.862 START TEST odd_alloc 00:04:27.862 ************************************ 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@84 -- # : 0 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:27.862 17:08:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:27.863 17:08:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.863 17:08:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:28.434 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:28.434 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:28.434 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:28.434 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:28.434 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:28.434 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:28.434 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:28.434 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:28.434 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:28.434 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:28.434 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:28.434 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:28.434 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:28.434 17:08:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@18 -- # local node= 00:04:28.434 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:28.434 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.434 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.434 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.434 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.434 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.434 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.434 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.434 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7895484 kB' 'MemAvailable: 9451580 kB' 'Buffers: 2436 kB' 'Cached: 1770188 kB' 'SwapCached: 0 kB' 'Active: 452016 kB' 'Inactive: 1442076 kB' 'Active(anon): 131932 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123316 kB' 'Mapped: 48796 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132864 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71116 kB' 'KernelStack: 6212 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 
'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:28.434 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.435 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.436 
17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.436 17:08:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.436 17:08:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7896156 kB' 'MemAvailable: 9452252 kB' 'Buffers: 2436 kB' 'Cached: 1770188 kB' 'SwapCached: 0 kB' 'Active: 451792 kB' 'Inactive: 1442076 
kB' 'Active(anon): 131708 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123040 kB' 'Mapped: 48796 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132856 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71108 kB' 'KernelStack: 6164 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.436 17:08:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.436 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:28.437 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # 
local get=HugePages_Rsvd 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7897300 kB' 'MemAvailable: 9453400 kB' 'Buffers: 2436 kB' 'Cached: 1770192 kB' 'SwapCached: 0 kB' 'Active: 451756 kB' 'Inactive: 1442080 kB' 'Active(anon): 131672 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123088 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132848 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71100 kB' 'KernelStack: 6224 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.438 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.439 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.440 17:08:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.440 17:08:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.440 
17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.440 17:08:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:28.440 nr_hugepages=1025 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:28.440 resv_hugepages=0 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:28.440 surplus_hugepages=0 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:28.440 anon_hugepages=0 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.440 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7897300 kB' 'MemAvailable: 9453400 kB' 'Buffers: 2436 kB' 'Cached: 1770192 kB' 'SwapCached: 0 kB' 'Active: 451800 kB' 'Inactive: 1442080 kB' 'Active(anon): 131716 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122828 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132848 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71100 kB' 'KernelStack: 6208 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:28.441 17:08:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.441 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.442 
17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.442 
17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:28.442 17:08:47 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.442 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7897820 kB' 'MemUsed: 4344152 kB' 'SwapCached: 0 kB' 'Active: 452060 kB' 'Inactive: 1442080 kB' 'Active(anon): 131976 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1772628 kB' 'Mapped: 48692 kB' 'AnonPages: 123088 kB' 'Shmem: 10464 kB' 'KernelStack: 6208 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61748 kB' 'Slab: 132848 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:28.443 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 [xtrace condensed: the IFS=': ' / read -r var val _ loop repeats for each remaining /proc/meminfo field from FilePages through Unaccepted; none matches HugePages_Surp, so each iteration hits "continue"] 00:04:28.444
17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.444 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.444 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.444 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.444 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.444 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.444 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.444 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.444 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.444 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.444 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.444 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.444 17:08:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:28.444 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:28.444 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:28.444 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:28.444 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:28.444 node0=1025 expecting 1025 00:04:28.444 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:28.444 17:08:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:28.444 
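The repetitive xtrace above is produced by a simple field-matching loop in setup/common.sh: each `/proc/meminfo` line is split on `IFS=': '` and skipped until the requested field matches. A minimal sketch of that parsing pattern — using an inline sample instead of the live `/proc/meminfo` so it runs anywhere, and with `get_field` as an illustrative name rather than the script's actual helper:

```shell
#!/usr/bin/env bash
# Sketch of the field-matching loop behind the xtrace above:
# split each "Field: value kB" line on ': ' and print the value
# once the requested field matches. The sample stands in for
# /proc/meminfo; get_field is a hypothetical name.
get_field() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # skip non-matching fields
    echo "$val"
    return 0
  done
  echo 0   # field absent: report 0, as the trace's "echo 0" does
}

sample='HugePages_Total: 1025
HugePages_Free: 1025
HugePages_Surp: 0'

get_field HugePages_Surp <<<"$sample"   # prints 0
```

This is why the log shows one `[[ Field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue` pair per meminfo field: under `set -x`, every loop iteration is traced.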
00:04:28.444 real 0m0.539s 00:04:28.444 user 0m0.280s 00:04:28.444 sys 0m0.292s 00:04:28.444 17:08:47 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.444 17:08:47 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:28.444 ************************************ 00:04:28.444 END TEST odd_alloc 00:04:28.444 ************************************ 00:04:28.444 17:08:47 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:28.444 17:08:47 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:28.444 17:08:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.444 17:08:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.444 17:08:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:28.444 ************************************ 00:04:28.444 START TEST custom_alloc 00:04:28.444 ************************************ 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:28.444 17:08:47 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:28.444 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:28.445 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:28.445 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:28.445 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:28.445 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:28.445 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:28.445 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:28.445 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.445 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.015 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI 
dev 00:04:29.015 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:29.015 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:29.015 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 
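The `nr_hugepages=512` seen in the custom_alloc trace above is just the requested size divided by the default hugepage size (1048576 kB / 2048 kB), then assigned to a one-node test array. A hedged sketch of that sizing step, mirroring the trace's variable names (the arithmetic, not the exact script, is the point):

```shell
#!/usr/bin/env bash
# Sketch of the sizing arithmetic behind "nr_hugepages=512" above:
# a 1048576 kB (1 GiB) request over the 2048 kB default hugepage
# size on x86_64, assigned to the last node of a single-node test
# array, as get_test_nr_hugepages / get_test_nr_hugepages_per_node do.
size=1048576              # requested kB (get_test_nr_hugepages 1048576)
default_hugepages=2048    # kB; typical x86_64 2 MiB hugepage
nr_hugepages=$(( size / default_hugepages ))

declare -a nodes_test
_no_nodes=1               # this run has a single NUMA node
nodes_test[_no_nodes - 1]=$nr_hugepages

echo "HUGENODE=nodes_hp[0]=${nodes_test[0]}"
```

That computed value is what later surfaces in the trace as `HUGENODE='nodes_hp[0]=512'` and in the meminfo snapshot as `HugePages_Total: 512`.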
00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8954664 kB' 'MemAvailable: 10510764 kB' 'Buffers: 2436 kB' 'Cached: 1770192 kB' 'SwapCached: 0 kB' 'Active: 452260 kB' 'Inactive: 1442080 kB' 'Active(anon): 132176 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123284 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132892 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71144 kB' 'KernelStack: 6280 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.016 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.016 17:08:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 [xtrace condensed: the IFS=': ' / read -r var val _ loop repeats for each remaining /proc/meminfo field from MemFree through HardwareCorrupted; none matches AnonHugePages, so each iteration hits "continue"] 00:04:29.017
17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.017 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.017 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.017 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:29.017 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:29.017 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.017 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:29.017 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.017 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.017 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.017 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.017 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.017 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.017 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.017 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.017 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8954664 kB' 'MemAvailable: 10510764 kB' 'Buffers: 2436 kB' 'Cached: 1770192 kB' 'SwapCached: 0 kB' 'Active: 451796 kB' 'Inactive: 1442080 kB' 'Active(anon): 131712 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442080 kB' 'Unevictable: 
1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123176 kB' 'Mapped: 48632 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132896 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71148 kB' 'KernelStack: 6276 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.018 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 
17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.019 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8954664 kB' 'MemAvailable: 10510764 kB' 'Buffers: 2436 kB' 'Cached: 1770192 kB' 'SwapCached: 0 kB' 'Active: 451812 kB' 'Inactive: 1442080 kB' 'Active(anon): 131728 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123172 kB' 'Mapped: 48632 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132892 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71144 kB' 'KernelStack: 6276 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.020 17:08:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.020 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.021 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:29.022 nr_hugepages=512 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:29.022 resv_hugepages=0 
00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:29.022 surplus_hugepages=0 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:29.022 anon_hugepages=0 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8954924 kB' 
'MemAvailable: 10511024 kB' 'Buffers: 2436 kB' 'Cached: 1770192 kB' 'SwapCached: 0 kB' 'Active: 451812 kB' 'Inactive: 1442080 kB' 'Active(anon): 131728 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123176 kB' 'Mapped: 48632 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132888 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71140 kB' 'KernelStack: 6276 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.022 17:08:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.022 17:08:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.022 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 
17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.023 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.024 
17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:29.024 17:08:47 
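The trace above is an xtrace of setup/common.sh's meminfo scan: it reads each line with `IFS=': '` into `var val _`, `continue`s until `var` matches the requested field (here `HugePages_Total`), then echoes the value (512). A minimal standalone re-implementation of that loop (an assumption-level sketch, not SPDK's actual `get_meminfo`):

```shell
# Sketch of the field scan traced above: split each /proc/meminfo line
# on ':' and spaces, skip non-matching fields, print the first match.
get_meminfo() {
    get=$1
    while IFS=': ' read -r var val _; do
        if [ "$var" = "$get" ]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo HugePages_Total
```

With `IFS=': '`, a line like `HugePages_Total:     512` splits into `var=HugePages_Total`, `val=512`, and the trailing unit (if any) lands in `_`, which is why the trace never has to trim whitespace or `kB` suffixes.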
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8954924 kB' 'MemUsed: 3287048 kB' 'SwapCached: 0 kB' 'Active: 451848 kB' 'Inactive: 1442080 kB' 'Active(anon): 131764 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 1772628 
kB' 'Mapped: 48632 kB' 'AnonPages: 123176 kB' 'Shmem: 10464 kB' 'KernelStack: 6276 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61748 kB' 'Slab: 132888 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71140 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.024 17:08:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.024 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # 
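When `get_meminfo` is called with a node argument (here `HugePages_Surp 0`), the trace shows it switching `mem_f` to `/sys/devices/system/node/node0/meminfo` and stripping the `Node <id> ` prefix those per-node files carry (`mem=("${mem[@]#Node +([0-9]) }")`, an extglob expansion). A simplified POSIX sketch of the same lookup, using `sed` for the prefix strip instead of extglob (an assumption-level re-implementation, not the original helper):

```shell
# Sketch of the per-node lookup traced above. Per-node meminfo files
# prefix every line with "Node <id> "; strip that, then scan as before.
get_node_meminfo() {
    get=$1 node=$2
    mem_f=/sys/devices/system/node/node${node}/meminfo
    [ -e "$mem_f" ] || mem_f=/proc/meminfo   # fall back on non-NUMA hosts
    sed 's/^Node [0-9][0-9]* //' "$mem_f" |
    while IFS=': ' read -r var val _; do
        [ "$var" = "$get" ] && { echo "$val"; break; }
    done
}

get_node_meminfo HugePages_Surp 0
```

In the run logged here the lookup returns 0 surplus pages, so `nodes_test[0]` stays at the 512 reserved pages and the `node0=512 expecting 512` check below passes.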
sorted_s[nodes_sys[node]]=1 00:04:29.025 node0=512 expecting 512 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:29.025 00:04:29.025 real 0m0.533s 00:04:29.025 user 0m0.278s 00:04:29.025 sys 0m0.288s 00:04:29.025 17:08:47 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.026 17:08:47 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:29.026 ************************************ 00:04:29.026 END TEST custom_alloc 00:04:29.026 ************************************ 00:04:29.026 17:08:47 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:29.026 17:08:47 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:29.026 17:08:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.026 17:08:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.026 17:08:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:29.026 ************************************ 00:04:29.026 START TEST no_shrink_alloc 00:04:29.026 ************************************ 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:29.026 17:08:47 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.026 17:08:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.606 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:29.606 0000:00:11.0 (1b36 0010): Already 
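The `get_test_nr_hugepages 2097152 0` trace above arrives at `nr_hugepages=1024` and pins the whole pool on user node 0. The arithmetic implied by the trace (an assumption: the size argument is in kB and the divisor is the `Hugepagesize: 2048 kB` reported in this run's meminfo dump):

```shell
# Sketch of the nr_hugepages derivation implied by the trace:
# requested pool size divided by the per-page size, both in kB.
size_kb=2097152       # size argument from the trace (2 GiB expressed in kB)
hugepagesize_kb=2048  # Hugepagesize reported in this run's meminfo output
nr_hugepages=$((size_kb / hugepagesize_kb))
echo "nr_hugepages=$nr_hugepages"
```

Because a single node id (`0`) was passed, the per-node loop assigns all 1024 pages to `nodes_test[0]` rather than spreading them across `no_nodes`.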
using the uio_pci_generic driver 00:04:29.606 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7913536 kB' 'MemAvailable: 9469636 kB' 'Buffers: 2436 kB' 'Cached: 1770192 kB' 'SwapCached: 0 kB' 'Active: 452376 kB' 'Inactive: 1442080 kB' 'Active(anon): 132292 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123480 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132872 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71124 kB' 'KernelStack: 6212 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.606 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 
17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.607 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.607 
17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 17:08:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 17:08:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7913536 kB' 'MemAvailable: 9469636 kB' 'Buffers: 2436 kB' 'Cached: 1770192 kB' 'SwapCached: 0 kB' 'Active: 451980 kB' 'Inactive: 1442080 kB' 'Active(anon): 131896 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123040 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132872 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71124 kB' 'KernelStack: 6196 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 
17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 17:08:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.608 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 
17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 
17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- 
# mem_f=/proc/meminfo 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7913536 kB' 'MemAvailable: 9469636 kB' 'Buffers: 2436 kB' 'Cached: 1770192 kB' 'SwapCached: 0 kB' 'Active: 452124 kB' 'Inactive: 1442080 kB' 'Active(anon): 132040 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123148 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132864 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71116 kB' 'KernelStack: 6148 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:29.612 nr_hugepages=1024 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:29.612 resv_hugepages=0 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:29.612 surplus_hugepages=0 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:29.612 anon_hugepages=0 
00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.612 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7913536 kB' 'MemAvailable: 9469636 kB' 'Buffers: 2436 kB' 'Cached: 1770192 kB' 'SwapCached: 0 kB' 'Active: 451800 kB' 'Inactive: 1442080 kB' 'Active(anon): 131716 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442080 kB' 
'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122828 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 61748 kB' 'Slab: 132860 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71112 kB' 'KernelStack: 6208 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.613 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@27 -- # local node 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.614 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7913536 kB' 'MemUsed: 4328436 kB' 'SwapCached: 0 kB' 'Active: 451792 kB' 'Inactive: 1442080 kB' 'Active(anon): 131708 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1772628 kB' 'Mapped: 48692 kB' 'AnonPages: 122820 kB' 'Shmem: 10464 kB' 'KernelStack: 6208 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61748 kB' 'Slab: 132860 kB' 'SReclaimable: 61748 kB' 'SUnreclaim: 71112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.615 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical "IFS=': ' / read -r var val _ / [[ key == HugePages_Surp ]] / continue" trace repeats for SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free ...]
00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:29.616 node0=1024 expecting 1024
00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:29.616 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:29.875 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:29.875 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:29.875 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:29.875 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc --
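The repeated `IFS=': '` / `read -r var val _` / `continue` events in the trace come from a helper that scans `/proc/meminfo` one `key: value` pair at a time until the requested key is found. A minimal standalone sketch of that parsing pattern — the function name `get_meminfo_field` and its optional file argument are illustrative assumptions, not the SPDK helper itself:

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-scanning pattern seen in the trace: split each
# "key: value" line on ': ' and stop once the requested key is reached.
# get_meminfo_field and the file argument are illustrative names only.
get_meminfo_field() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # not the key we want: keep scanning
        echo "$val"                       # numeric value only; the unit lands in "$_"
        return 0
    done < "$file"
    return 1  # key not present in the file
}
```

On the system in this log, `get_meminfo_field HugePages_Surp` would print `0`, matching the `echo 0` / `return 0` events in the trace.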
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7913920 kB' 'MemAvailable: 9470016 kB' 'Buffers: 2436 kB' 'Cached: 1770192 kB' 'SwapCached: 0 kB' 'Active: 447684 kB' 'Inactive: 1442080 kB' 'Active(anon): 127600 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 118708 kB' 'Mapped: 48076 kB' 'Shmem: 10464 kB' 'KReclaimable: 61744 kB' 'Slab: 132664 kB' 'SReclaimable: 61744 kB' 'SUnreclaim: 70920 kB' 'KernelStack: 6068 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:30.138 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical "IFS=': ' / read -r var val _ / [[ key == AnonHugePages ]] / continue" trace repeats for each remaining /proc/meminfo key from MemFree through HardwareCorrupted ...]
00:04:30.140 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:30.140 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:30.140 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:30.140 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:30.140 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:30.140 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:30.140 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:30.140 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:30.140 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.140 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.140 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.140 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.140 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.140 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.140 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.140 17:08:48
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7914548 kB' 'MemAvailable: 9470644 kB' 'Buffers: 2436 kB' 'Cached: 1770192 kB' 'SwapCached: 0 kB' 'Active: 447356 kB' 'Inactive: 1442080 kB' 'Active(anon): 127272 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118344 kB' 'Mapped: 47952 kB' 'Shmem: 10464 kB' 'KReclaimable: 61744 kB' 'Slab: 132652 kB' 'SReclaimable: 61744 kB' 'SUnreclaim: 70908 kB' 'KernelStack: 6064 kB' 'PageTables: 3628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54468 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
00:04:30.140 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:30.140 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.140 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical "IFS=': ' / read -r var val _ / [[ key == HugePages_Surp ]] / continue" trace repeats for each following /proc/meminfo key from MemFree through SecPageTables ...]
00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.141
17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.141 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- 
# mem_f=/proc/meminfo 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7914300 kB' 'MemAvailable: 9470396 kB' 'Buffers: 2436 kB' 'Cached: 1770192 kB' 'SwapCached: 0 kB' 'Active: 447024 kB' 'Inactive: 1442080 kB' 'Active(anon): 126940 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118308 kB' 'Mapped: 47952 kB' 'Shmem: 10464 kB' 'KReclaimable: 61744 kB' 'Slab: 132652 kB' 'SReclaimable: 61744 kB' 'SUnreclaim: 70908 kB' 'KernelStack: 6096 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54468 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 
9437184 kB' 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:30.142 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.143 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.144 17:08:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:30.144 nr_hugepages=1024 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:30.144 resv_hugepages=0 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:30.144 surplus_hugepages=0 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:30.144 anon_hugepages=0 
00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.144 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7914300 kB' 'MemAvailable: 9470396 kB' 'Buffers: 2436 kB' 'Cached: 1770192 kB' 'SwapCached: 0 kB' 'Active: 446992 kB' 'Inactive: 1442080 kB' 'Active(anon): 126908 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118280 kB' 'Mapped: 47952 kB' 'Shmem: 10464 kB' 'KReclaimable: 61744 kB' 'Slab: 132640 kB' 'SReclaimable: 61744 kB' 'SUnreclaim: 70896 kB' 'KernelStack: 6112 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54468 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.145 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.146 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@27 -- # local node 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7914300 kB' 'MemUsed: 4327672 kB' 'SwapCached: 0 kB' 'Active: 446960 kB' 'Inactive: 1442080 kB' 'Active(anon): 126876 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1442080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1772628 kB' 'Mapped: 47952 kB' 'AnonPages: 118244 kB' 'Shmem: 10464 kB' 'KernelStack: 6096 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61744 kB' 'Slab: 132636 kB' 'SReclaimable: 61744 kB' 'SUnreclaim: 70892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.147 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.148 17:08:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:30.148 node0=1024 expecting 1024 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:30.148 17:08:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:30.148 00:04:30.148 real 0m1.052s 00:04:30.149 user 0m0.524s 00:04:30.149 sys 0m0.595s 00:04:30.149 17:08:48 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.149 17:08:48 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:30.149 ************************************ 00:04:30.149 END TEST no_shrink_alloc 00:04:30.149 ************************************ 00:04:30.149 17:08:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:30.149 17:08:49 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:30.149 17:08:49 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:30.149 17:08:49 setup.sh.hugepages -- setup/hugepages.sh@39 
-- # for node in "${!nodes_sys[@]}" 00:04:30.149 17:08:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.149 17:08:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.149 17:08:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.149 17:08:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.149 17:08:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:30.149 17:08:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:30.149 ************************************ 00:04:30.149 END TEST hugepages 00:04:30.149 ************************************ 00:04:30.149 00:04:30.149 real 0m4.610s 00:04:30.149 user 0m2.237s 00:04:30.149 sys 0m2.489s 00:04:30.149 17:08:49 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.149 17:08:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:30.149 17:08:49 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:30.149 17:08:49 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:30.149 17:08:49 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.149 17:08:49 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.149 17:08:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:30.149 ************************************ 00:04:30.149 START TEST driver 00:04:30.149 ************************************ 00:04:30.149 17:08:49 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:30.407 * Looking for test storage... 
00:04:30.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:30.407 17:08:49 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:30.407 17:08:49 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:30.408 17:08:49 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:30.976 17:08:49 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:30.976 17:08:49 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.976 17:08:49 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.976 17:08:49 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:30.976 ************************************ 00:04:30.976 START TEST guess_driver 00:04:30.976 ************************************ 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:30.976 17:08:49 
setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:30.976 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:30.976 Looking for driver=uio_pci_generic 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.976 17:08:49 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:31.542 17:08:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:31.542 17:08:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:31.542 17:08:50 
setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.801 17:08:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.801 17:08:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:31.801 17:08:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.801 17:08:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.801 17:08:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:31.801 17:08:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.801 17:08:50 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:31.801 17:08:50 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:31.801 17:08:50 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:31.801 17:08:50 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:32.367 00:04:32.367 real 0m1.475s 00:04:32.367 user 0m0.561s 00:04:32.367 sys 0m0.896s 00:04:32.367 17:08:51 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.367 17:08:51 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:32.367 ************************************ 00:04:32.367 END TEST guess_driver 00:04:32.367 ************************************ 00:04:32.367 17:08:51 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:32.367 00:04:32.367 real 0m2.164s 00:04:32.367 user 0m0.783s 00:04:32.367 sys 0m1.409s 00:04:32.367 17:08:51 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.367 17:08:51 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:32.367 ************************************ 
00:04:32.367 END TEST driver 00:04:32.367 ************************************ 00:04:32.367 17:08:51 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:32.367 17:08:51 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:32.367 17:08:51 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.367 17:08:51 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.367 17:08:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:32.367 ************************************ 00:04:32.367 START TEST devices 00:04:32.367 ************************************ 00:04:32.367 17:08:51 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:32.625 * Looking for test storage... 00:04:32.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:32.625 17:08:51 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:32.625 17:08:51 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:32.625 17:08:51 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:32.625 17:08:51 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:33.561 17:08:52 setup.sh.devices 
-- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 
00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:33.561 17:08:52 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:33.561 17:08:52 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:33.561 No valid GPT data, bailing 00:04:33.561 17:08:52 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:33.561 17:08:52 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:33.561 17:08:52 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:33.561 17:08:52 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:33.561 17:08:52 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:33.561 17:08:52 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@200 -- # for block in 
"/sys/block/nvme"!(*c*) 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:33.561 17:08:52 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:33.561 17:08:52 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:33.561 No valid GPT data, bailing 00:04:33.561 17:08:52 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:33.561 17:08:52 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:33.561 17:08:52 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:33.561 17:08:52 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:33.561 17:08:52 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:33.561 17:08:52 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 
00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:33.561 17:08:52 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:33.561 17:08:52 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:33.561 No valid GPT data, bailing 00:04:33.561 17:08:52 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:33.561 17:08:52 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:33.561 17:08:52 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:33.561 17:08:52 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:33.561 17:08:52 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:33.561 17:08:52 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:33.561 17:08:52 setup.sh.devices -- scripts/common.sh@378 -- # local 
block=nvme1n1 pt 00:04:33.561 17:08:52 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:33.561 No valid GPT data, bailing 00:04:33.561 17:08:52 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:33.561 17:08:52 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:33.561 17:08:52 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:33.561 17:08:52 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:33.561 17:08:52 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:33.561 17:08:52 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:33.561 17:08:52 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.561 17:08:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:33.561 ************************************ 00:04:33.561 START TEST nvme_mount 00:04:33.561 ************************************ 00:04:33.561 17:08:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:33.561 17:08:52 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:33.562 17:08:52 
setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:33.562 17:08:52 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.562 17:08:52 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:33.562 17:08:52 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:33.562 17:08:52 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:33.562 17:08:52 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:33.562 17:08:52 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:33.562 17:08:52 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:33.562 17:08:52 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:33.562 17:08:52 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:33.562 17:08:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:33.562 17:08:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.562 17:08:52 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:33.562 17:08:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:33.562 17:08:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.562 17:08:52 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:33.562 17:08:52 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:33.562 17:08:52 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:35.008 Creating new GPT entries in memory. 00:04:35.008 GPT data structures destroyed! 
You may now partition the disk using fdisk or 00:04:35.008 other utilities. 00:04:35.008 17:08:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:35.008 17:08:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.008 17:08:53 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:35.008 17:08:53 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:35.008 17:08:53 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:35.943 Creating new GPT entries in memory. 00:04:35.943 The operation has completed successfully. 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57141 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # 
found=1 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:35.943 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.201 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.201 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.201 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.201 17:08:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.201 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.201 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:36.201 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:36.201 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:36.201 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:36.201 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:36.201 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:36.201 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:36.201 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:36.201 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@25 
-- # wipefs --all /dev/nvme0n1p1 00:04:36.201 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:36.201 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:36.201 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:36.460 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:36.460 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:36.460 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:36.460 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:36.460 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:36.460 17:08:55 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:36.460 17:08:55 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:36.460 17:08:55 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:36.460 17:08:55 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:36.460 17:08:55 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:36.460 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:36.460 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:36.460 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:36.460 17:08:55 setup.sh.devices.nvme_mount -- 
setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:36.460 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:36.460 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:36.460 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:36.460 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:36.460 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:36.460 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.460 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:36.460 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:36.460 17:08:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.460 17:08:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:36.718 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.718 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:36.718 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:36.718 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.718 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.718 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount 
-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.976 17:08:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:37.235 17:08:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:37.235 17:08:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:37.235 17:08:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:37.235 17:08:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.235 17:08:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:37.235 17:08:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.494 17:08:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:37.494 17:08:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.494 17:08:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:37.494 17:08:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.752 17:08:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.752 17:08:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:37.752 17:08:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 
00:04:37.752 17:08:56 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:37.752 17:08:56 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:37.752 17:08:56 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.752 17:08:56 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.752 17:08:56 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:37.752 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:37.752 00:04:37.752 real 0m3.990s 00:04:37.752 user 0m0.669s 00:04:37.752 sys 0m1.054s 00:04:37.752 17:08:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.752 17:08:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:37.752 ************************************ 00:04:37.752 END TEST nvme_mount 00:04:37.752 ************************************ 00:04:37.752 17:08:56 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:37.752 17:08:56 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:37.752 17:08:56 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.752 17:08:56 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.752 17:08:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:37.752 ************************************ 00:04:37.752 START TEST dm_mount 00:04:37.752 ************************************ 00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 
00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:37.753 17:08:56 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:38.689 Creating new GPT entries in memory. 00:04:38.689 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:38.689 other utilities. 
00:04:38.689 17:08:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:38.689 17:08:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:38.689 17:08:57 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:38.689 17:08:57 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:38.689 17:08:57 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:40.066 Creating new GPT entries in memory. 00:04:40.066 The operation has completed successfully. 00:04:40.066 17:08:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:40.066 17:08:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.067 17:08:58 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:40.067 17:08:58 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:40.067 17:08:58 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:41.002 The operation has completed successfully. 
00:04:41.002 17:08:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:41.002 17:08:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.002 17:08:59 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57578 00:04:41.002 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:41.002 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:41.002 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:41.002 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:41.002 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:41.002 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.002 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:41.002 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.002 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:41.002 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:41.002 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:41.002 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:41.002 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:41.002 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:41.002 17:08:59 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test 
mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:41.002 17:08:59 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 
00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.003 17:08:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.261 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.261 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.261 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.261 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.261 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:41.261 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:41.261 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:41.261 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:41.261 17:09:00 
setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:41.261 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:41.519 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:41.519 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:41.519 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:41.519 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:41.519 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:41.519 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:41.519 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:41.519 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:41.519 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.519 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:41.519 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:41.519 17:09:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.519 17:09:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:41.519 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.519 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:41.519 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:41.519 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.519 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.519 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.777 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.777 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.777 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.777 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.777 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:41.777 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:41.777 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:41.777 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:41.777 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:41.777 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:41.777 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:41.777 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:41.777 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:42.035 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:42.035 17:09:00 setup.sh.devices.dm_mount -- 
setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:42.035 17:09:00 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:42.035 00:04:42.035 real 0m4.222s 00:04:42.035 user 0m0.452s 00:04:42.035 sys 0m0.716s 00:04:42.035 17:09:00 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.035 17:09:00 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:42.035 ************************************ 00:04:42.035 END TEST dm_mount 00:04:42.035 ************************************ 00:04:42.035 17:09:00 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:42.035 17:09:00 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:42.035 17:09:00 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:42.035 17:09:00 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:42.035 17:09:00 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:42.035 17:09:00 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:42.035 17:09:00 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:42.035 17:09:00 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:42.293 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:42.294 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:42.294 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:42.294 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:42.294 17:09:01 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:42.294 17:09:01 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:42.294 17:09:01 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:42.294 17:09:01 
setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:42.294 17:09:01 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:42.294 17:09:01 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:42.294 17:09:01 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:42.294 00:04:42.294 real 0m9.786s 00:04:42.294 user 0m1.801s 00:04:42.294 sys 0m2.381s 00:04:42.294 17:09:01 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.294 17:09:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:42.294 ************************************ 00:04:42.294 END TEST devices 00:04:42.294 ************************************ 00:04:42.294 17:09:01 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:42.294 ************************************ 00:04:42.294 END TEST setup.sh 00:04:42.294 ************************************ 00:04:42.294 00:04:42.294 real 0m21.668s 00:04:42.294 user 0m7.067s 00:04:42.294 sys 0m9.097s 00:04:42.294 17:09:01 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.294 17:09:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:42.294 17:09:01 -- common/autotest_common.sh@1142 -- # return 0 00:04:42.294 17:09:01 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:42.861 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:42.861 Hugepages 00:04:42.861 node hugesize free / total 00:04:42.861 node0 1048576kB 0 / 0 00:04:42.861 node0 2048kB 2048 / 2048 00:04:42.861 00:04:42.861 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:43.119 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:43.119 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:43.119 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:43.119 17:09:02 -- spdk/autotest.sh@130 -- # uname 
-s 00:04:43.119 17:09:02 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:43.119 17:09:02 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:43.119 17:09:02 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:44.055 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:44.055 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:44.055 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:44.055 17:09:02 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:44.990 17:09:03 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:44.990 17:09:03 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:44.990 17:09:03 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:44.990 17:09:03 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:44.990 17:09:03 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:44.990 17:09:03 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:44.990 17:09:03 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:44.990 17:09:03 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:44.990 17:09:03 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:45.248 17:09:03 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:45.248 17:09:03 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:45.248 17:09:03 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:45.507 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.507 Waiting for block devices as requested 00:04:45.507 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:45.766 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:45.766 17:09:04 -- 
common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:45.766 17:09:04 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:45.766 17:09:04 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:45.766 17:09:04 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:45.766 17:09:04 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:45.766 17:09:04 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:45.766 17:09:04 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:45.766 17:09:04 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:45.766 17:09:04 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:45.766 17:09:04 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:45.766 17:09:04 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:45.766 17:09:04 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:45.766 17:09:04 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:45.766 17:09:04 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:45.766 17:09:04 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:45.766 17:09:04 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:45.766 17:09:04 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:45.766 17:09:04 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:45.766 17:09:04 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:45.766 17:09:04 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:45.766 17:09:04 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:45.766 17:09:04 -- common/autotest_common.sh@1557 -- # continue 00:04:45.766 17:09:04 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:45.766 17:09:04 -- 
common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:45.766 17:09:04 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:45.766 17:09:04 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:45.766 17:09:04 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:45.766 17:09:04 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:45.766 17:09:04 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:45.766 17:09:04 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:45.766 17:09:04 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:45.766 17:09:04 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:45.766 17:09:04 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:45.766 17:09:04 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:45.766 17:09:04 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:45.766 17:09:04 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:45.766 17:09:04 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:45.766 17:09:04 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:45.766 17:09:04 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:45.766 17:09:04 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:45.766 17:09:04 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:45.766 17:09:04 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:45.766 17:09:04 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:45.766 17:09:04 -- common/autotest_common.sh@1557 -- # continue 00:04:45.766 17:09:04 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:45.766 17:09:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.766 17:09:04 -- common/autotest_common.sh@10 -- 
# set +x 00:04:45.766 17:09:04 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:45.766 17:09:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:45.766 17:09:04 -- common/autotest_common.sh@10 -- # set +x 00:04:45.766 17:09:04 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:46.702 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.702 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.702 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.702 17:09:05 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:46.702 17:09:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:46.702 17:09:05 -- common/autotest_common.sh@10 -- # set +x 00:04:46.702 17:09:05 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:46.702 17:09:05 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:46.702 17:09:05 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:46.702 17:09:05 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:46.702 17:09:05 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:46.702 17:09:05 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:46.702 17:09:05 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:46.702 17:09:05 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:46.702 17:09:05 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:46.702 17:09:05 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:46.702 17:09:05 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:46.960 17:09:05 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:46.960 17:09:05 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:46.960 17:09:05 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:46.960 
17:09:05 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:46.960 17:09:05 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:46.960 17:09:05 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:46.960 17:09:05 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:46.960 17:09:05 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:46.960 17:09:05 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:46.960 17:09:05 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:46.960 17:09:05 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:46.960 17:09:05 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:46.960 17:09:05 -- common/autotest_common.sh@1593 -- # return 0 00:04:46.960 17:09:05 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:46.960 17:09:05 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:46.960 17:09:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:46.960 17:09:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:46.960 17:09:05 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:46.960 17:09:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:46.960 17:09:05 -- common/autotest_common.sh@10 -- # set +x 00:04:46.960 17:09:05 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:46.960 17:09:05 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:46.960 17:09:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.960 17:09:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.960 17:09:05 -- common/autotest_common.sh@10 -- # set +x 00:04:46.960 ************************************ 00:04:46.960 START TEST env 00:04:46.960 ************************************ 00:04:46.960 17:09:05 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:46.960 * Looking for test storage... 
00:04:46.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:46.960 17:09:05 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:46.960 17:09:05 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.960 17:09:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.960 17:09:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.960 ************************************ 00:04:46.960 START TEST env_memory 00:04:46.960 ************************************ 00:04:46.960 17:09:05 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:46.960 00:04:46.960 00:04:46.960 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.960 http://cunit.sourceforge.net/ 00:04:46.960 00:04:46.960 00:04:46.960 Suite: memory 00:04:46.960 Test: alloc and free memory map ...[2024-07-22 17:09:05.858165] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:46.960 passed 00:04:47.219 Test: mem map translation ...[2024-07-22 17:09:05.919121] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:47.219 [2024-07-22 17:09:05.919214] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:47.219 [2024-07-22 17:09:05.919318] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:47.219 [2024-07-22 17:09:05.919352] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:47.219 passed 00:04:47.219 Test: mem map registration ...[2024-07-22 17:09:06.017707] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:47.219 [2024-07-22 17:09:06.017783] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:47.219 passed 00:04:47.219 Test: mem map adjacent registrations ...passed 00:04:47.219 00:04:47.219 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.219 suites 1 1 n/a 0 0 00:04:47.219 tests 4 4 4 0 0 00:04:47.219 asserts 152 152 152 0 n/a 00:04:47.219 00:04:47.219 Elapsed time = 0.343 seconds 00:04:47.478 00:04:47.478 real 0m0.389s 00:04:47.478 user 0m0.360s 00:04:47.478 sys 0m0.023s 00:04:47.478 ************************************ 00:04:47.478 END TEST env_memory 00:04:47.478 ************************************ 00:04:47.478 17:09:06 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.478 17:09:06 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:47.478 17:09:06 env -- common/autotest_common.sh@1142 -- # return 0 00:04:47.478 17:09:06 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:47.478 17:09:06 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.478 17:09:06 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.478 17:09:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.478 ************************************ 00:04:47.478 START TEST env_vtophys 00:04:47.478 ************************************ 00:04:47.478 17:09:06 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:47.478 EAL: lib.eal log level changed from notice to debug 00:04:47.478 EAL: Detected lcore 0 as core 0 on socket 0 00:04:47.478 EAL: Detected lcore 1 as core 0 on socket 0 00:04:47.478 EAL: Detected lcore 2 as core 0 on socket 0 00:04:47.478 EAL: 
Detected lcore 3 as core 0 on socket 0 00:04:47.478 EAL: Detected lcore 4 as core 0 on socket 0 00:04:47.478 EAL: Detected lcore 5 as core 0 on socket 0 00:04:47.478 EAL: Detected lcore 6 as core 0 on socket 0 00:04:47.478 EAL: Detected lcore 7 as core 0 on socket 0 00:04:47.478 EAL: Detected lcore 8 as core 0 on socket 0 00:04:47.478 EAL: Detected lcore 9 as core 0 on socket 0 00:04:47.478 EAL: Maximum logical cores by configuration: 128 00:04:47.478 EAL: Detected CPU lcores: 10 00:04:47.478 EAL: Detected NUMA nodes: 1 00:04:47.478 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:47.478 EAL: Detected shared linkage of DPDK 00:04:47.478 EAL: No shared files mode enabled, IPC will be disabled 00:04:47.478 EAL: Selected IOVA mode 'PA' 00:04:47.478 EAL: Probing VFIO support... 00:04:47.478 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:47.478 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:47.478 EAL: Ask a virtual area of 0x2e000 bytes 00:04:47.478 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:47.478 EAL: Setting up physically contiguous memory... 
00:04:47.478 EAL: Setting maximum number of open files to 524288 00:04:47.478 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:47.478 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:47.478 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.478 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:47.478 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.478 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.478 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:47.478 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:47.478 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.478 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:47.478 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.478 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.478 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:47.478 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:47.478 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.479 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:47.479 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.479 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.479 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:47.479 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:47.479 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.479 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:47.479 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.479 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.479 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:47.479 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:47.479 EAL: Hugepages will be freed exactly as allocated. 
00:04:47.479 EAL: No shared files mode enabled, IPC is disabled 00:04:47.479 EAL: No shared files mode enabled, IPC is disabled 00:04:47.479 EAL: TSC frequency is ~2200000 KHz 00:04:47.479 EAL: Main lcore 0 is ready (tid=7f94258d3a40;cpuset=[0]) 00:04:47.479 EAL: Trying to obtain current memory policy. 00:04:47.479 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.737 EAL: Restoring previous memory policy: 0 00:04:47.737 EAL: request: mp_malloc_sync 00:04:47.737 EAL: No shared files mode enabled, IPC is disabled 00:04:47.737 EAL: Heap on socket 0 was expanded by 2MB 00:04:47.737 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:47.737 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:47.737 EAL: Mem event callback 'spdk:(nil)' registered 00:04:47.737 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:47.737 00:04:47.737 00:04:47.737 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.737 http://cunit.sourceforge.net/ 00:04:47.737 00:04:47.737 00:04:47.737 Suite: components_suite 00:04:48.059 Test: vtophys_malloc_test ...passed 00:04:48.059 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:48.059 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.059 EAL: Restoring previous memory policy: 4 00:04:48.059 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.059 EAL: request: mp_malloc_sync 00:04:48.059 EAL: No shared files mode enabled, IPC is disabled 00:04:48.059 EAL: Heap on socket 0 was expanded by 4MB 00:04:48.059 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.059 EAL: request: mp_malloc_sync 00:04:48.059 EAL: No shared files mode enabled, IPC is disabled 00:04:48.059 EAL: Heap on socket 0 was shrunk by 4MB 00:04:48.059 EAL: Trying to obtain current memory policy. 
00:04:48.059 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.059 EAL: Restoring previous memory policy: 4 00:04:48.059 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.059 EAL: request: mp_malloc_sync 00:04:48.059 EAL: No shared files mode enabled, IPC is disabled 00:04:48.059 EAL: Heap on socket 0 was expanded by 6MB 00:04:48.059 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.059 EAL: request: mp_malloc_sync 00:04:48.059 EAL: No shared files mode enabled, IPC is disabled 00:04:48.059 EAL: Heap on socket 0 was shrunk by 6MB 00:04:48.059 EAL: Trying to obtain current memory policy. 00:04:48.059 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.059 EAL: Restoring previous memory policy: 4 00:04:48.059 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.059 EAL: request: mp_malloc_sync 00:04:48.059 EAL: No shared files mode enabled, IPC is disabled 00:04:48.059 EAL: Heap on socket 0 was expanded by 10MB 00:04:48.059 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.059 EAL: request: mp_malloc_sync 00:04:48.059 EAL: No shared files mode enabled, IPC is disabled 00:04:48.059 EAL: Heap on socket 0 was shrunk by 10MB 00:04:48.059 EAL: Trying to obtain current memory policy. 00:04:48.059 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.059 EAL: Restoring previous memory policy: 4 00:04:48.059 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.059 EAL: request: mp_malloc_sync 00:04:48.059 EAL: No shared files mode enabled, IPC is disabled 00:04:48.059 EAL: Heap on socket 0 was expanded by 18MB 00:04:48.318 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.318 EAL: request: mp_malloc_sync 00:04:48.318 EAL: No shared files mode enabled, IPC is disabled 00:04:48.318 EAL: Heap on socket 0 was shrunk by 18MB 00:04:48.318 EAL: Trying to obtain current memory policy. 
00:04:48.318 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.318 EAL: Restoring previous memory policy: 4 00:04:48.318 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.318 EAL: request: mp_malloc_sync 00:04:48.318 EAL: No shared files mode enabled, IPC is disabled 00:04:48.318 EAL: Heap on socket 0 was expanded by 34MB 00:04:48.318 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.318 EAL: request: mp_malloc_sync 00:04:48.318 EAL: No shared files mode enabled, IPC is disabled 00:04:48.318 EAL: Heap on socket 0 was shrunk by 34MB 00:04:48.318 EAL: Trying to obtain current memory policy. 00:04:48.318 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.318 EAL: Restoring previous memory policy: 4 00:04:48.318 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.318 EAL: request: mp_malloc_sync 00:04:48.318 EAL: No shared files mode enabled, IPC is disabled 00:04:48.318 EAL: Heap on socket 0 was expanded by 66MB 00:04:48.578 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.578 EAL: request: mp_malloc_sync 00:04:48.578 EAL: No shared files mode enabled, IPC is disabled 00:04:48.578 EAL: Heap on socket 0 was shrunk by 66MB 00:04:48.578 EAL: Trying to obtain current memory policy. 00:04:48.578 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.578 EAL: Restoring previous memory policy: 4 00:04:48.578 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.578 EAL: request: mp_malloc_sync 00:04:48.578 EAL: No shared files mode enabled, IPC is disabled 00:04:48.578 EAL: Heap on socket 0 was expanded by 130MB 00:04:48.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.836 EAL: request: mp_malloc_sync 00:04:48.836 EAL: No shared files mode enabled, IPC is disabled 00:04:48.836 EAL: Heap on socket 0 was shrunk by 130MB 00:04:49.095 EAL: Trying to obtain current memory policy. 
00:04:49.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.095 EAL: Restoring previous memory policy: 4 00:04:49.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.095 EAL: request: mp_malloc_sync 00:04:49.095 EAL: No shared files mode enabled, IPC is disabled 00:04:49.095 EAL: Heap on socket 0 was expanded by 258MB 00:04:49.661 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.661 EAL: request: mp_malloc_sync 00:04:49.661 EAL: No shared files mode enabled, IPC is disabled 00:04:49.661 EAL: Heap on socket 0 was shrunk by 258MB 00:04:49.921 EAL: Trying to obtain current memory policy. 00:04:49.921 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.179 EAL: Restoring previous memory policy: 4 00:04:50.179 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.179 EAL: request: mp_malloc_sync 00:04:50.179 EAL: No shared files mode enabled, IPC is disabled 00:04:50.179 EAL: Heap on socket 0 was expanded by 514MB 00:04:51.114 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.114 EAL: request: mp_malloc_sync 00:04:51.114 EAL: No shared files mode enabled, IPC is disabled 00:04:51.114 EAL: Heap on socket 0 was shrunk by 514MB 00:04:51.682 EAL: Trying to obtain current memory policy. 
00:04:51.682 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.248 EAL: Restoring previous memory policy: 4 00:04:52.248 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.248 EAL: request: mp_malloc_sync 00:04:52.248 EAL: No shared files mode enabled, IPC is disabled 00:04:52.248 EAL: Heap on socket 0 was expanded by 1026MB 00:04:53.627 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.885 EAL: request: mp_malloc_sync 00:04:53.885 EAL: No shared files mode enabled, IPC is disabled 00:04:53.885 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:55.787 passed 00:04:55.787 00:04:55.787 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.787 suites 1 1 n/a 0 0 00:04:55.787 tests 2 2 2 0 0 00:04:55.787 asserts 5390 5390 5390 0 n/a 00:04:55.787 00:04:55.787 Elapsed time = 7.751 seconds 00:04:55.787 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.787 EAL: request: mp_malloc_sync 00:04:55.787 EAL: No shared files mode enabled, IPC is disabled 00:04:55.787 EAL: Heap on socket 0 was shrunk by 2MB 00:04:55.787 EAL: No shared files mode enabled, IPC is disabled 00:04:55.787 EAL: No shared files mode enabled, IPC is disabled 00:04:55.787 EAL: No shared files mode enabled, IPC is disabled 00:04:55.787 00:04:55.787 real 0m8.075s 00:04:55.787 user 0m6.854s 00:04:55.787 sys 0m1.058s 00:04:55.787 17:09:14 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.787 ************************************ 00:04:55.787 END TEST env_vtophys 00:04:55.787 17:09:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:55.787 ************************************ 00:04:55.787 17:09:14 env -- common/autotest_common.sh@1142 -- # return 0 00:04:55.787 17:09:14 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:55.787 17:09:14 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.787 17:09:14 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.787 17:09:14 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:55.787 ************************************ 00:04:55.787 START TEST env_pci 00:04:55.787 ************************************ 00:04:55.787 17:09:14 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:55.787 00:04:55.787 00:04:55.787 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.787 http://cunit.sourceforge.net/ 00:04:55.787 00:04:55.787 00:04:55.787 Suite: pci 00:04:55.787 Test: pci_hook ...[2024-07-22 17:09:14.393327] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58845 has claimed it 00:04:55.787 EAL: Cannot find device (10000:00:01.0) 00:04:55.787 EAL: Failed to attach device on primary process 00:04:55.787 passed 00:04:55.787 00:04:55.787 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.787 suites 1 1 n/a 0 0 00:04:55.787 tests 1 1 1 0 0 00:04:55.787 asserts 25 25 25 0 n/a 00:04:55.787 00:04:55.787 Elapsed time = 0.008 seconds 00:04:55.787 00:04:55.787 real 0m0.091s 00:04:55.787 user 0m0.048s 00:04:55.787 sys 0m0.041s 00:04:55.787 17:09:14 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.787 17:09:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:55.787 ************************************ 00:04:55.787 END TEST env_pci 00:04:55.787 ************************************ 00:04:55.787 17:09:14 env -- common/autotest_common.sh@1142 -- # return 0 00:04:55.787 17:09:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:55.787 17:09:14 env -- env/env.sh@15 -- # uname 00:04:55.787 17:09:14 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:55.787 17:09:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:55.787 17:09:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 
--base-virtaddr=0x200000000000 00:04:55.788 17:09:14 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:55.788 17:09:14 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.788 17:09:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.788 ************************************ 00:04:55.788 START TEST env_dpdk_post_init 00:04:55.788 ************************************ 00:04:55.788 17:09:14 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:55.788 EAL: Detected CPU lcores: 10 00:04:55.788 EAL: Detected NUMA nodes: 1 00:04:55.788 EAL: Detected shared linkage of DPDK 00:04:55.788 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:55.788 EAL: Selected IOVA mode 'PA' 00:04:55.788 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:55.788 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:56.046 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:56.046 Starting DPDK initialization... 00:04:56.046 Starting SPDK post initialization... 00:04:56.046 SPDK NVMe probe 00:04:56.046 Attaching to 0000:00:10.0 00:04:56.046 Attaching to 0000:00:11.0 00:04:56.046 Attached to 0000:00:10.0 00:04:56.046 Attached to 0000:00:11.0 00:04:56.046 Cleaning up... 
00:04:56.046 00:04:56.046 real 0m0.288s 00:04:56.046 user 0m0.097s 00:04:56.046 sys 0m0.091s 00:04:56.046 17:09:14 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.046 17:09:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.046 ************************************ 00:04:56.046 END TEST env_dpdk_post_init 00:04:56.046 ************************************ 00:04:56.046 17:09:14 env -- common/autotest_common.sh@1142 -- # return 0 00:04:56.046 17:09:14 env -- env/env.sh@26 -- # uname 00:04:56.046 17:09:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:56.046 17:09:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:56.046 17:09:14 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.046 17:09:14 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.046 17:09:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.046 ************************************ 00:04:56.046 START TEST env_mem_callbacks 00:04:56.046 ************************************ 00:04:56.046 17:09:14 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:56.046 EAL: Detected CPU lcores: 10 00:04:56.046 EAL: Detected NUMA nodes: 1 00:04:56.046 EAL: Detected shared linkage of DPDK 00:04:56.046 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:56.046 EAL: Selected IOVA mode 'PA' 00:04:56.305 00:04:56.305 00:04:56.305 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.305 http://cunit.sourceforge.net/ 00:04:56.305 00:04:56.305 00:04:56.305 Suite: memory 00:04:56.305 Test: test ... 
00:04:56.305 register 0x200000200000 2097152 00:04:56.305 malloc 3145728 00:04:56.305 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:56.305 register 0x200000400000 4194304 00:04:56.305 buf 0x2000004fffc0 len 3145728 PASSED 00:04:56.305 malloc 64 00:04:56.305 buf 0x2000004ffec0 len 64 PASSED 00:04:56.305 malloc 4194304 00:04:56.305 register 0x200000800000 6291456 00:04:56.305 buf 0x2000009fffc0 len 4194304 PASSED 00:04:56.305 free 0x2000004fffc0 3145728 00:04:56.305 free 0x2000004ffec0 64 00:04:56.305 unregister 0x200000400000 4194304 PASSED 00:04:56.305 free 0x2000009fffc0 4194304 00:04:56.305 unregister 0x200000800000 6291456 PASSED 00:04:56.305 malloc 8388608 00:04:56.305 register 0x200000400000 10485760 00:04:56.305 buf 0x2000005fffc0 len 8388608 PASSED 00:04:56.305 free 0x2000005fffc0 8388608 00:04:56.305 unregister 0x200000400000 10485760 PASSED 00:04:56.305 passed 00:04:56.305 00:04:56.305 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.305 suites 1 1 n/a 0 0 00:04:56.305 tests 1 1 1 0 0 00:04:56.305 asserts 15 15 15 0 n/a 00:04:56.305 00:04:56.305 Elapsed time = 0.060 seconds 00:04:56.305 00:04:56.305 real 0m0.255s 00:04:56.305 user 0m0.083s 00:04:56.305 sys 0m0.071s 00:04:56.305 17:09:15 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.305 17:09:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:56.305 ************************************ 00:04:56.305 END TEST env_mem_callbacks 00:04:56.305 ************************************ 00:04:56.305 17:09:15 env -- common/autotest_common.sh@1142 -- # return 0 00:04:56.305 00:04:56.305 real 0m9.453s 00:04:56.305 user 0m7.563s 00:04:56.305 sys 0m1.507s 00:04:56.305 17:09:15 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.305 17:09:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.305 ************************************ 00:04:56.305 END TEST env 00:04:56.305 ************************************ 
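The register/unregister pairs in the mem_callbacks output above are whole multiples of the 2 MiB hugepage size: the 3 MiB malloc (3145728) triggers a 4 MiB registration (4194304). A minimal sketch of that rounding, assuming 2 MiB hugepages (allocator metadata can push a request over a boundary, which is why the 4 MiB malloc in the log registers 6 MiB):

```shell
# Round an allocation up to whole 2 MiB hugepages, mirroring the
# "malloc 3145728 -> register ... 4194304" pair in the test output above.
hugepage=$((2 * 1024 * 1024))    # 2097152 bytes, assumed hugepage size
alloc=3145728                    # the 3 MiB malloc from the test output
registered=$(( (alloc + hugepage - 1) / hugepage * hugepage ))
echo "$registered"               # 4194304
```

This only models the size rounding visible in the log, not the actual registration path inside DPDK.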
00:04:56.305 17:09:15 -- common/autotest_common.sh@1142 -- # return 0 00:04:56.305 17:09:15 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:56.305 17:09:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.305 17:09:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.305 17:09:15 -- common/autotest_common.sh@10 -- # set +x 00:04:56.305 ************************************ 00:04:56.305 START TEST rpc 00:04:56.305 ************************************ 00:04:56.305 17:09:15 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:56.564 * Looking for test storage... 00:04:56.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:56.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.564 17:09:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58964 00:04:56.564 17:09:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.564 17:09:15 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:56.564 17:09:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58964 00:04:56.564 17:09:15 rpc -- common/autotest_common.sh@829 -- # '[' -z 58964 ']' 00:04:56.564 17:09:15 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.564 17:09:15 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.564 17:09:15 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.564 17:09:15 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.564 17:09:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.564 [2024-07-22 17:09:15.466457] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:04:56.564 [2024-07-22 17:09:15.466671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58964 ] 00:04:56.822 [2024-07-22 17:09:15.644383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.080 [2024-07-22 17:09:15.991462] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:57.080 [2024-07-22 17:09:15.991612] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58964' to capture a snapshot of events at runtime. 00:04:57.080 [2024-07-22 17:09:15.991650] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:57.080 [2024-07-22 17:09:15.991676] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:57.080 [2024-07-22 17:09:15.991703] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58964 for offline analysis/debug. 
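The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the harness's waitforlisten helper. A minimal sketch of the polling idea it relies on (the socket path and the background job here are illustrative stand-ins; the real helper also issues an RPC to confirm the target responds):

```shell
# Poll, with a bounded retry count, until a socket path appears.
sock=/tmp/sketch_spdk.sock       # stand-in for /var/tmp/spdk.sock
rm -f "$sock"
( sleep 1; touch "$sock" ) &     # stand-in for spdk_tgt creating its socket
status=timeout
for _ in $(seq 1 100); do
  if [ -e "$sock" ]; then
    status=listening
    break
  fi
  sleep 0.1
done
echo "$status"
rm -f "$sock"
```

Bounding the retries is what lets the trap on SIGINT/SIGTERM/EXIT (set just before waitforlisten) clean up the target instead of hanging forever.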
00:04:57.080 [2024-07-22 17:09:15.991775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.015 17:09:16 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.015 17:09:16 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:58.015 17:09:16 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:58.015 17:09:16 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:58.015 17:09:16 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:58.015 17:09:16 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:58.015 17:09:16 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.015 17:09:16 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.015 17:09:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.015 ************************************ 00:04:58.015 START TEST rpc_integrity 00:04:58.015 ************************************ 00:04:58.015 17:09:16 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:58.015 17:09:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:58.015 17:09:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.015 17:09:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.015 17:09:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.015 17:09:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:58.015 17:09:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:58.015 17:09:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:58.015 17:09:16 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:58.015 17:09:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.015 17:09:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.015 17:09:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.015 17:09:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:58.015 17:09:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:58.015 17:09:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.015 17:09:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.274 17:09:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.274 17:09:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:58.274 { 00:04:58.274 "name": "Malloc0", 00:04:58.274 "aliases": [ 00:04:58.274 "4f7766ea-6441-4a6a-80b8-64b837a6cc9b" 00:04:58.274 ], 00:04:58.274 "product_name": "Malloc disk", 00:04:58.274 "block_size": 512, 00:04:58.274 "num_blocks": 16384, 00:04:58.274 "uuid": "4f7766ea-6441-4a6a-80b8-64b837a6cc9b", 00:04:58.274 "assigned_rate_limits": { 00:04:58.274 "rw_ios_per_sec": 0, 00:04:58.274 "rw_mbytes_per_sec": 0, 00:04:58.274 "r_mbytes_per_sec": 0, 00:04:58.274 "w_mbytes_per_sec": 0 00:04:58.274 }, 00:04:58.274 "claimed": false, 00:04:58.274 "zoned": false, 00:04:58.274 "supported_io_types": { 00:04:58.274 "read": true, 00:04:58.274 "write": true, 00:04:58.274 "unmap": true, 00:04:58.274 "flush": true, 00:04:58.274 "reset": true, 00:04:58.274 "nvme_admin": false, 00:04:58.274 "nvme_io": false, 00:04:58.274 "nvme_io_md": false, 00:04:58.274 "write_zeroes": true, 00:04:58.274 "zcopy": true, 00:04:58.274 "get_zone_info": false, 00:04:58.274 "zone_management": false, 00:04:58.274 "zone_append": false, 00:04:58.274 "compare": false, 00:04:58.274 "compare_and_write": false, 00:04:58.274 "abort": true, 00:04:58.274 "seek_hole": false, 
00:04:58.274 "seek_data": false, 00:04:58.274 "copy": true, 00:04:58.274 "nvme_iov_md": false 00:04:58.274 }, 00:04:58.274 "memory_domains": [ 00:04:58.274 { 00:04:58.274 "dma_device_id": "system", 00:04:58.274 "dma_device_type": 1 00:04:58.274 }, 00:04:58.274 { 00:04:58.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.274 "dma_device_type": 2 00:04:58.274 } 00:04:58.274 ], 00:04:58.274 "driver_specific": {} 00:04:58.274 } 00:04:58.274 ]' 00:04:58.274 17:09:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:58.274 17:09:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:58.274 17:09:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:58.275 17:09:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.275 17:09:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.275 [2024-07-22 17:09:17.030081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:58.275 [2024-07-22 17:09:17.030182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:58.275 [2024-07-22 17:09:17.030234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:04:58.275 [2024-07-22 17:09:17.030259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:58.275 [2024-07-22 17:09:17.034077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:58.275 [2024-07-22 17:09:17.034136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:58.275 Passthru0 00:04:58.275 17:09:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.275 17:09:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:58.275 17:09:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.275 17:09:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:58.275 17:09:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.275 17:09:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:58.275 { 00:04:58.275 "name": "Malloc0", 00:04:58.275 "aliases": [ 00:04:58.275 "4f7766ea-6441-4a6a-80b8-64b837a6cc9b" 00:04:58.275 ], 00:04:58.275 "product_name": "Malloc disk", 00:04:58.275 "block_size": 512, 00:04:58.275 "num_blocks": 16384, 00:04:58.275 "uuid": "4f7766ea-6441-4a6a-80b8-64b837a6cc9b", 00:04:58.275 "assigned_rate_limits": { 00:04:58.275 "rw_ios_per_sec": 0, 00:04:58.275 "rw_mbytes_per_sec": 0, 00:04:58.275 "r_mbytes_per_sec": 0, 00:04:58.275 "w_mbytes_per_sec": 0 00:04:58.275 }, 00:04:58.275 "claimed": true, 00:04:58.275 "claim_type": "exclusive_write", 00:04:58.275 "zoned": false, 00:04:58.275 "supported_io_types": { 00:04:58.275 "read": true, 00:04:58.275 "write": true, 00:04:58.275 "unmap": true, 00:04:58.275 "flush": true, 00:04:58.275 "reset": true, 00:04:58.275 "nvme_admin": false, 00:04:58.275 "nvme_io": false, 00:04:58.275 "nvme_io_md": false, 00:04:58.275 "write_zeroes": true, 00:04:58.275 "zcopy": true, 00:04:58.275 "get_zone_info": false, 00:04:58.275 "zone_management": false, 00:04:58.275 "zone_append": false, 00:04:58.275 "compare": false, 00:04:58.275 "compare_and_write": false, 00:04:58.275 "abort": true, 00:04:58.275 "seek_hole": false, 00:04:58.275 "seek_data": false, 00:04:58.275 "copy": true, 00:04:58.275 "nvme_iov_md": false 00:04:58.275 }, 00:04:58.275 "memory_domains": [ 00:04:58.275 { 00:04:58.275 "dma_device_id": "system", 00:04:58.275 "dma_device_type": 1 00:04:58.275 }, 00:04:58.275 { 00:04:58.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.275 "dma_device_type": 2 00:04:58.275 } 00:04:58.275 ], 00:04:58.275 "driver_specific": {} 00:04:58.275 }, 00:04:58.275 { 00:04:58.275 "name": "Passthru0", 00:04:58.275 "aliases": [ 00:04:58.275 "ab94382f-10d7-56a3-babc-4a264395a10a" 00:04:58.275 ], 00:04:58.275 "product_name": "passthru", 00:04:58.275 
"block_size": 512, 00:04:58.275 "num_blocks": 16384, 00:04:58.275 "uuid": "ab94382f-10d7-56a3-babc-4a264395a10a", 00:04:58.275 "assigned_rate_limits": { 00:04:58.275 "rw_ios_per_sec": 0, 00:04:58.275 "rw_mbytes_per_sec": 0, 00:04:58.275 "r_mbytes_per_sec": 0, 00:04:58.275 "w_mbytes_per_sec": 0 00:04:58.275 }, 00:04:58.275 "claimed": false, 00:04:58.275 "zoned": false, 00:04:58.275 "supported_io_types": { 00:04:58.275 "read": true, 00:04:58.275 "write": true, 00:04:58.275 "unmap": true, 00:04:58.275 "flush": true, 00:04:58.275 "reset": true, 00:04:58.275 "nvme_admin": false, 00:04:58.275 "nvme_io": false, 00:04:58.275 "nvme_io_md": false, 00:04:58.275 "write_zeroes": true, 00:04:58.275 "zcopy": true, 00:04:58.275 "get_zone_info": false, 00:04:58.275 "zone_management": false, 00:04:58.275 "zone_append": false, 00:04:58.275 "compare": false, 00:04:58.275 "compare_and_write": false, 00:04:58.275 "abort": true, 00:04:58.275 "seek_hole": false, 00:04:58.275 "seek_data": false, 00:04:58.275 "copy": true, 00:04:58.275 "nvme_iov_md": false 00:04:58.275 }, 00:04:58.275 "memory_domains": [ 00:04:58.275 { 00:04:58.275 "dma_device_id": "system", 00:04:58.275 "dma_device_type": 1 00:04:58.275 }, 00:04:58.275 { 00:04:58.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.275 "dma_device_type": 2 00:04:58.275 } 00:04:58.275 ], 00:04:58.275 "driver_specific": { 00:04:58.275 "passthru": { 00:04:58.275 "name": "Passthru0", 00:04:58.275 "base_bdev_name": "Malloc0" 00:04:58.275 } 00:04:58.275 } 00:04:58.275 } 00:04:58.275 ]' 00:04:58.275 17:09:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:58.275 17:09:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:58.275 17:09:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:58.275 17:09:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.275 17:09:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.275 17:09:17 rpc.rpc_integrity 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.275 17:09:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:58.275 17:09:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.275 17:09:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.275 17:09:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.275 17:09:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:58.275 17:09:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.275 17:09:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.275 17:09:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.275 17:09:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:58.275 17:09:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:58.547 17:09:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:58.547 00:04:58.547 real 0m0.387s 00:04:58.547 user 0m0.242s 00:04:58.547 sys 0m0.047s 00:04:58.547 17:09:17 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.547 ************************************ 00:04:58.547 END TEST rpc_integrity 00:04:58.547 17:09:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.547 ************************************ 00:04:58.547 17:09:17 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:58.547 17:09:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:58.547 17:09:17 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.547 17:09:17 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.547 17:09:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.547 ************************************ 00:04:58.547 START TEST rpc_plugins 00:04:58.547 ************************************ 00:04:58.547 17:09:17 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # 
rpc_plugins 00:04:58.547 17:09:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:58.547 17:09:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.547 17:09:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.547 17:09:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.547 17:09:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:58.547 17:09:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:58.547 17:09:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.547 17:09:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.547 17:09:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.547 17:09:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:58.547 { 00:04:58.547 "name": "Malloc1", 00:04:58.547 "aliases": [ 00:04:58.547 "8f8ba0c0-1d42-4a89-8791-c8306196475f" 00:04:58.547 ], 00:04:58.547 "product_name": "Malloc disk", 00:04:58.547 "block_size": 4096, 00:04:58.547 "num_blocks": 256, 00:04:58.547 "uuid": "8f8ba0c0-1d42-4a89-8791-c8306196475f", 00:04:58.547 "assigned_rate_limits": { 00:04:58.547 "rw_ios_per_sec": 0, 00:04:58.547 "rw_mbytes_per_sec": 0, 00:04:58.547 "r_mbytes_per_sec": 0, 00:04:58.547 "w_mbytes_per_sec": 0 00:04:58.547 }, 00:04:58.547 "claimed": false, 00:04:58.547 "zoned": false, 00:04:58.547 "supported_io_types": { 00:04:58.547 "read": true, 00:04:58.547 "write": true, 00:04:58.547 "unmap": true, 00:04:58.547 "flush": true, 00:04:58.547 "reset": true, 00:04:58.547 "nvme_admin": false, 00:04:58.547 "nvme_io": false, 00:04:58.547 "nvme_io_md": false, 00:04:58.547 "write_zeroes": true, 00:04:58.547 "zcopy": true, 00:04:58.547 "get_zone_info": false, 00:04:58.547 "zone_management": false, 00:04:58.547 "zone_append": false, 00:04:58.547 "compare": false, 00:04:58.547 "compare_and_write": false, 00:04:58.547 "abort": true, 00:04:58.547 
"seek_hole": false, 00:04:58.547 "seek_data": false, 00:04:58.547 "copy": true, 00:04:58.547 "nvme_iov_md": false 00:04:58.547 }, 00:04:58.547 "memory_domains": [ 00:04:58.547 { 00:04:58.547 "dma_device_id": "system", 00:04:58.547 "dma_device_type": 1 00:04:58.547 }, 00:04:58.547 { 00:04:58.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.547 "dma_device_type": 2 00:04:58.547 } 00:04:58.547 ], 00:04:58.547 "driver_specific": {} 00:04:58.547 } 00:04:58.547 ]' 00:04:58.547 17:09:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:58.547 17:09:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:58.547 17:09:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:58.547 17:09:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.547 17:09:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.547 17:09:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.547 17:09:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:58.547 17:09:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.547 17:09:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.547 17:09:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.547 17:09:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:58.547 17:09:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:58.547 17:09:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:58.547 00:04:58.547 real 0m0.180s 00:04:58.547 user 0m0.121s 00:04:58.547 sys 0m0.024s 00:04:58.547 17:09:17 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.547 ************************************ 00:04:58.547 END TEST rpc_plugins 00:04:58.547 ************************************ 00:04:58.547 17:09:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.851 17:09:17 rpc -- 
common/autotest_common.sh@1142 -- # return 0 00:04:58.851 17:09:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:58.851 17:09:17 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.851 17:09:17 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.851 17:09:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.851 ************************************ 00:04:58.851 START TEST rpc_trace_cmd_test 00:04:58.851 ************************************ 00:04:58.851 17:09:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:58.851 17:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:58.851 17:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:58.851 17:09:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.851 17:09:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:58.851 17:09:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.851 17:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:58.851 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58964", 00:04:58.851 "tpoint_group_mask": "0x8", 00:04:58.851 "iscsi_conn": { 00:04:58.851 "mask": "0x2", 00:04:58.851 "tpoint_mask": "0x0" 00:04:58.851 }, 00:04:58.851 "scsi": { 00:04:58.851 "mask": "0x4", 00:04:58.851 "tpoint_mask": "0x0" 00:04:58.851 }, 00:04:58.851 "bdev": { 00:04:58.851 "mask": "0x8", 00:04:58.851 "tpoint_mask": "0xffffffffffffffff" 00:04:58.851 }, 00:04:58.851 "nvmf_rdma": { 00:04:58.851 "mask": "0x10", 00:04:58.852 "tpoint_mask": "0x0" 00:04:58.852 }, 00:04:58.852 "nvmf_tcp": { 00:04:58.852 "mask": "0x20", 00:04:58.852 "tpoint_mask": "0x0" 00:04:58.852 }, 00:04:58.852 "ftl": { 00:04:58.852 "mask": "0x40", 00:04:58.852 "tpoint_mask": "0x0" 00:04:58.852 }, 00:04:58.852 "blobfs": { 00:04:58.852 "mask": "0x80", 00:04:58.852 "tpoint_mask": "0x0" 00:04:58.852 }, 
00:04:58.852 "dsa": { 00:04:58.852 "mask": "0x200", 00:04:58.852 "tpoint_mask": "0x0" 00:04:58.852 }, 00:04:58.852 "thread": { 00:04:58.852 "mask": "0x400", 00:04:58.852 "tpoint_mask": "0x0" 00:04:58.852 }, 00:04:58.852 "nvme_pcie": { 00:04:58.852 "mask": "0x800", 00:04:58.852 "tpoint_mask": "0x0" 00:04:58.852 }, 00:04:58.852 "iaa": { 00:04:58.852 "mask": "0x1000", 00:04:58.852 "tpoint_mask": "0x0" 00:04:58.852 }, 00:04:58.852 "nvme_tcp": { 00:04:58.852 "mask": "0x2000", 00:04:58.852 "tpoint_mask": "0x0" 00:04:58.852 }, 00:04:58.852 "bdev_nvme": { 00:04:58.852 "mask": "0x4000", 00:04:58.852 "tpoint_mask": "0x0" 00:04:58.852 }, 00:04:58.852 "sock": { 00:04:58.852 "mask": "0x8000", 00:04:58.852 "tpoint_mask": "0x0" 00:04:58.852 } 00:04:58.852 }' 00:04:58.852 17:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:58.852 17:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:58.852 17:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:58.852 17:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:58.852 17:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:58.852 17:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:58.852 17:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:58.852 17:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:58.852 17:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:59.111 17:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:59.111 00:04:59.111 real 0m0.272s 00:04:59.111 user 0m0.240s 00:04:59.111 sys 0m0.024s 00:04:59.111 17:09:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.111 17:09:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:59.111 ************************************ 00:04:59.111 END TEST 
rpc_trace_cmd_test 00:04:59.111 ************************************ 00:04:59.111 17:09:17 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:59.111 17:09:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:59.111 17:09:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:59.111 17:09:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:59.111 17:09:17 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.111 17:09:17 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.111 17:09:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.111 ************************************ 00:04:59.111 START TEST rpc_daemon_integrity 00:04:59.111 ************************************ 00:04:59.111 17:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:59.111 17:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:59.111 17:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.111 17:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.111 17:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.111 17:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:59.111 17:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:59.111 17:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:59.111 17:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:59.111 17:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.111 17:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.111 17:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.111 17:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:59.111 17:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- 
# rpc_cmd bdev_get_bdevs 00:04:59.111 17:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.111 17:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.111 17:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.111 17:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:59.111 { 00:04:59.111 "name": "Malloc2", 00:04:59.111 "aliases": [ 00:04:59.111 "af4cc828-eb8a-454f-bc5f-d88d0009a782" 00:04:59.111 ], 00:04:59.111 "product_name": "Malloc disk", 00:04:59.111 "block_size": 512, 00:04:59.111 "num_blocks": 16384, 00:04:59.111 "uuid": "af4cc828-eb8a-454f-bc5f-d88d0009a782", 00:04:59.111 "assigned_rate_limits": { 00:04:59.111 "rw_ios_per_sec": 0, 00:04:59.111 "rw_mbytes_per_sec": 0, 00:04:59.111 "r_mbytes_per_sec": 0, 00:04:59.111 "w_mbytes_per_sec": 0 00:04:59.111 }, 00:04:59.111 "claimed": false, 00:04:59.111 "zoned": false, 00:04:59.111 "supported_io_types": { 00:04:59.111 "read": true, 00:04:59.111 "write": true, 00:04:59.111 "unmap": true, 00:04:59.111 "flush": true, 00:04:59.111 "reset": true, 00:04:59.111 "nvme_admin": false, 00:04:59.111 "nvme_io": false, 00:04:59.111 "nvme_io_md": false, 00:04:59.111 "write_zeroes": true, 00:04:59.111 "zcopy": true, 00:04:59.111 "get_zone_info": false, 00:04:59.111 "zone_management": false, 00:04:59.111 "zone_append": false, 00:04:59.111 "compare": false, 00:04:59.111 "compare_and_write": false, 00:04:59.111 "abort": true, 00:04:59.111 "seek_hole": false, 00:04:59.111 "seek_data": false, 00:04:59.111 "copy": true, 00:04:59.111 "nvme_iov_md": false 00:04:59.111 }, 00:04:59.111 "memory_domains": [ 00:04:59.111 { 00:04:59.111 "dma_device_id": "system", 00:04:59.111 "dma_device_type": 1 00:04:59.111 }, 00:04:59.111 { 00:04:59.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.111 "dma_device_type": 2 00:04:59.111 } 00:04:59.111 ], 00:04:59.111 "driver_specific": {} 00:04:59.111 } 00:04:59.111 ]' 
00:04:59.111 17:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:59.111 17:09:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:59.111 17:09:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:59.111 17:09:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.111 17:09:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.111 [2024-07-22 17:09:18.018736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:59.111 [2024-07-22 17:09:18.018813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:59.111 [2024-07-22 17:09:18.018849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:04:59.111 [2024-07-22 17:09:18.018865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:59.111 [2024-07-22 17:09:18.021902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:59.111 [2024-07-22 17:09:18.021990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:59.111 Passthru0 00:04:59.111 17:09:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.111 17:09:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:59.112 17:09:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.112 17:09:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.112 17:09:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.112 17:09:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:59.112 { 00:04:59.112 "name": "Malloc2", 00:04:59.112 "aliases": [ 00:04:59.112 "af4cc828-eb8a-454f-bc5f-d88d0009a782" 00:04:59.112 ], 00:04:59.112 "product_name": "Malloc disk", 00:04:59.112 "block_size": 
512, 00:04:59.112 "num_blocks": 16384, 00:04:59.112 "uuid": "af4cc828-eb8a-454f-bc5f-d88d0009a782", 00:04:59.112 "assigned_rate_limits": { 00:04:59.112 "rw_ios_per_sec": 0, 00:04:59.112 "rw_mbytes_per_sec": 0, 00:04:59.112 "r_mbytes_per_sec": 0, 00:04:59.112 "w_mbytes_per_sec": 0 00:04:59.112 }, 00:04:59.112 "claimed": true, 00:04:59.112 "claim_type": "exclusive_write", 00:04:59.112 "zoned": false, 00:04:59.112 "supported_io_types": { 00:04:59.112 "read": true, 00:04:59.112 "write": true, 00:04:59.112 "unmap": true, 00:04:59.112 "flush": true, 00:04:59.112 "reset": true, 00:04:59.112 "nvme_admin": false, 00:04:59.112 "nvme_io": false, 00:04:59.112 "nvme_io_md": false, 00:04:59.112 "write_zeroes": true, 00:04:59.112 "zcopy": true, 00:04:59.112 "get_zone_info": false, 00:04:59.112 "zone_management": false, 00:04:59.112 "zone_append": false, 00:04:59.112 "compare": false, 00:04:59.112 "compare_and_write": false, 00:04:59.112 "abort": true, 00:04:59.112 "seek_hole": false, 00:04:59.112 "seek_data": false, 00:04:59.112 "copy": true, 00:04:59.112 "nvme_iov_md": false 00:04:59.112 }, 00:04:59.112 "memory_domains": [ 00:04:59.112 { 00:04:59.112 "dma_device_id": "system", 00:04:59.112 "dma_device_type": 1 00:04:59.112 }, 00:04:59.112 { 00:04:59.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.112 "dma_device_type": 2 00:04:59.112 } 00:04:59.112 ], 00:04:59.112 "driver_specific": {} 00:04:59.112 }, 00:04:59.112 { 00:04:59.112 "name": "Passthru0", 00:04:59.112 "aliases": [ 00:04:59.112 "0576e8f4-f0a3-5e17-89fa-958a4c6d1c7c" 00:04:59.112 ], 00:04:59.112 "product_name": "passthru", 00:04:59.112 "block_size": 512, 00:04:59.112 "num_blocks": 16384, 00:04:59.112 "uuid": "0576e8f4-f0a3-5e17-89fa-958a4c6d1c7c", 00:04:59.112 "assigned_rate_limits": { 00:04:59.112 "rw_ios_per_sec": 0, 00:04:59.112 "rw_mbytes_per_sec": 0, 00:04:59.112 "r_mbytes_per_sec": 0, 00:04:59.112 "w_mbytes_per_sec": 0 00:04:59.112 }, 00:04:59.112 "claimed": false, 00:04:59.112 "zoned": false, 00:04:59.112 
"supported_io_types": { 00:04:59.112 "read": true, 00:04:59.112 "write": true, 00:04:59.112 "unmap": true, 00:04:59.112 "flush": true, 00:04:59.112 "reset": true, 00:04:59.112 "nvme_admin": false, 00:04:59.112 "nvme_io": false, 00:04:59.112 "nvme_io_md": false, 00:04:59.112 "write_zeroes": true, 00:04:59.112 "zcopy": true, 00:04:59.112 "get_zone_info": false, 00:04:59.112 "zone_management": false, 00:04:59.112 "zone_append": false, 00:04:59.112 "compare": false, 00:04:59.112 "compare_and_write": false, 00:04:59.112 "abort": true, 00:04:59.112 "seek_hole": false, 00:04:59.112 "seek_data": false, 00:04:59.112 "copy": true, 00:04:59.112 "nvme_iov_md": false 00:04:59.112 }, 00:04:59.112 "memory_domains": [ 00:04:59.112 { 00:04:59.112 "dma_device_id": "system", 00:04:59.112 "dma_device_type": 1 00:04:59.112 }, 00:04:59.112 { 00:04:59.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.112 "dma_device_type": 2 00:04:59.112 } 00:04:59.112 ], 00:04:59.112 "driver_specific": { 00:04:59.112 "passthru": { 00:04:59.112 "name": "Passthru0", 00:04:59.112 "base_bdev_name": "Malloc2" 00:04:59.112 } 00:04:59.112 } 00:04:59.112 } 00:04:59.112 ]' 00:04:59.112 17:09:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:59.371 17:09:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:59.371 17:09:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:59.371 17:09:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.371 17:09:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.371 17:09:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.371 17:09:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:59.371 17:09:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.371 17:09:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:04:59.371 17:09:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.371 17:09:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:59.371 17:09:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.371 17:09:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.371 17:09:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.371 17:09:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:59.371 17:09:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:59.371 17:09:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:59.371 00:04:59.371 real 0m0.366s 00:04:59.371 user 0m0.231s 00:04:59.371 sys 0m0.042s 00:04:59.371 17:09:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.371 17:09:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.371 ************************************ 00:04:59.371 END TEST rpc_daemon_integrity 00:04:59.371 ************************************ 00:04:59.371 17:09:18 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:59.371 17:09:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:59.371 17:09:18 rpc -- rpc/rpc.sh@84 -- # killprocess 58964 00:04:59.371 17:09:18 rpc -- common/autotest_common.sh@948 -- # '[' -z 58964 ']' 00:04:59.371 17:09:18 rpc -- common/autotest_common.sh@952 -- # kill -0 58964 00:04:59.371 17:09:18 rpc -- common/autotest_common.sh@953 -- # uname 00:04:59.371 17:09:18 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:59.371 17:09:18 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58964 00:04:59.371 17:09:18 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:59.371 killing process with pid 58964 00:04:59.371 17:09:18 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:04:59.371 17:09:18 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58964' 00:04:59.371 17:09:18 rpc -- common/autotest_common.sh@967 -- # kill 58964 00:04:59.371 17:09:18 rpc -- common/autotest_common.sh@972 -- # wait 58964 00:05:01.926 ************************************ 00:05:01.926 END TEST rpc 00:05:01.926 ************************************ 00:05:01.926 00:05:01.926 real 0m5.512s 00:05:01.926 user 0m6.185s 00:05:01.926 sys 0m0.899s 00:05:01.926 17:09:20 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.926 17:09:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.926 17:09:20 -- common/autotest_common.sh@1142 -- # return 0 00:05:01.926 17:09:20 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:01.926 17:09:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.926 17:09:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.926 17:09:20 -- common/autotest_common.sh@10 -- # set +x 00:05:01.926 ************************************ 00:05:01.926 START TEST skip_rpc 00:05:01.926 ************************************ 00:05:01.926 17:09:20 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:01.926 * Looking for test storage... 
00:05:01.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:01.926 17:09:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:01.926 17:09:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:01.926 17:09:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:01.926 17:09:20 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.926 17:09:20 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.926 17:09:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.926 ************************************ 00:05:01.926 START TEST skip_rpc 00:05:01.926 ************************************ 00:05:01.926 17:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:01.926 17:09:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59196 00:05:01.926 17:09:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:01.926 17:09:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.926 17:09:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:02.185 [2024-07-22 17:09:21.027229] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:05:02.185 [2024-07-22 17:09:21.028120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59196 ] 00:05:02.443 [2024-07-22 17:09:21.209664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.701 [2024-07-22 17:09:21.511378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59196 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 59196 ']' 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 59196 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:07.968 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59196 00:05:07.969 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:07.969 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:07.969 killing process with pid 59196 00:05:07.969 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59196' 00:05:07.969 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 59196 00:05:07.969 17:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 59196 00:05:09.344 00:05:09.344 real 0m7.349s 00:05:09.344 user 0m6.763s 00:05:09.344 sys 0m0.470s 00:05:09.344 17:09:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.344 ************************************ 00:05:09.344 END TEST skip_rpc 00:05:09.344 ************************************ 00:05:09.344 17:09:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.344 17:09:28 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:09.344 17:09:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:09.344 17:09:28 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.344 17:09:28 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.344 17:09:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 
00:05:09.344 ************************************ 00:05:09.344 START TEST skip_rpc_with_json 00:05:09.344 ************************************ 00:05:09.344 17:09:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:09.344 17:09:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:09.344 17:09:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59300 00:05:09.344 17:09:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.344 17:09:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.344 17:09:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59300 00:05:09.344 17:09:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59300 ']' 00:05:09.344 17:09:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.345 17:09:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.345 17:09:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.345 17:09:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.345 17:09:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.603 [2024-07-22 17:09:28.430048] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:05:09.603 [2024-07-22 17:09:28.430658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59300 ] 00:05:09.861 [2024-07-22 17:09:28.638678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.120 [2024-07-22 17:09:28.914206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.057 17:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.057 17:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:11.057 17:09:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:11.057 17:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.057 17:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.057 [2024-07-22 17:09:29.759210] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:11.057 request: 00:05:11.057 { 00:05:11.057 "trtype": "tcp", 00:05:11.057 "method": "nvmf_get_transports", 00:05:11.057 "req_id": 1 00:05:11.057 } 00:05:11.057 Got JSON-RPC error response 00:05:11.057 response: 00:05:11.057 { 00:05:11.057 "code": -19, 00:05:11.057 "message": "No such device" 00:05:11.057 } 00:05:11.057 17:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:11.057 17:09:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:11.057 17:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.057 17:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.057 [2024-07-22 17:09:29.767375] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:11.057 17:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.057 17:09:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:11.057 17:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.057 17:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.057 17:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.057 17:09:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:11.057 { 00:05:11.057 "subsystems": [ 00:05:11.057 { 00:05:11.057 "subsystem": "keyring", 00:05:11.057 "config": [] 00:05:11.057 }, 00:05:11.057 { 00:05:11.057 "subsystem": "iobuf", 00:05:11.057 "config": [ 00:05:11.057 { 00:05:11.057 "method": "iobuf_set_options", 00:05:11.057 "params": { 00:05:11.057 "small_pool_count": 8192, 00:05:11.057 "large_pool_count": 1024, 00:05:11.057 "small_bufsize": 8192, 00:05:11.057 "large_bufsize": 135168 00:05:11.057 } 00:05:11.057 } 00:05:11.057 ] 00:05:11.057 }, 00:05:11.057 { 00:05:11.057 "subsystem": "sock", 00:05:11.057 "config": [ 00:05:11.057 { 00:05:11.057 "method": "sock_set_default_impl", 00:05:11.057 "params": { 00:05:11.057 "impl_name": "posix" 00:05:11.057 } 00:05:11.057 }, 00:05:11.057 { 00:05:11.057 "method": "sock_impl_set_options", 00:05:11.057 "params": { 00:05:11.057 "impl_name": "ssl", 00:05:11.057 "recv_buf_size": 4096, 00:05:11.057 "send_buf_size": 4096, 00:05:11.057 "enable_recv_pipe": true, 00:05:11.057 "enable_quickack": false, 00:05:11.057 "enable_placement_id": 0, 00:05:11.057 "enable_zerocopy_send_server": true, 00:05:11.057 "enable_zerocopy_send_client": false, 00:05:11.057 "zerocopy_threshold": 0, 00:05:11.057 "tls_version": 0, 00:05:11.057 "enable_ktls": false 00:05:11.057 } 00:05:11.057 }, 00:05:11.057 { 00:05:11.057 "method": "sock_impl_set_options", 00:05:11.057 "params": { 
00:05:11.057 "impl_name": "posix", 00:05:11.057 "recv_buf_size": 2097152, 00:05:11.057 "send_buf_size": 2097152, 00:05:11.057 "enable_recv_pipe": true, 00:05:11.057 "enable_quickack": false, 00:05:11.057 "enable_placement_id": 0, 00:05:11.057 "enable_zerocopy_send_server": true, 00:05:11.057 "enable_zerocopy_send_client": false, 00:05:11.057 "zerocopy_threshold": 0, 00:05:11.057 "tls_version": 0, 00:05:11.057 "enable_ktls": false 00:05:11.057 } 00:05:11.057 } 00:05:11.057 ] 00:05:11.057 }, 00:05:11.057 { 00:05:11.057 "subsystem": "vmd", 00:05:11.057 "config": [] 00:05:11.057 }, 00:05:11.057 { 00:05:11.057 "subsystem": "accel", 00:05:11.057 "config": [ 00:05:11.057 { 00:05:11.057 "method": "accel_set_options", 00:05:11.057 "params": { 00:05:11.057 "small_cache_size": 128, 00:05:11.057 "large_cache_size": 16, 00:05:11.057 "task_count": 2048, 00:05:11.057 "sequence_count": 2048, 00:05:11.057 "buf_count": 2048 00:05:11.057 } 00:05:11.057 } 00:05:11.057 ] 00:05:11.057 }, 00:05:11.057 { 00:05:11.057 "subsystem": "bdev", 00:05:11.057 "config": [ 00:05:11.057 { 00:05:11.057 "method": "bdev_set_options", 00:05:11.057 "params": { 00:05:11.057 "bdev_io_pool_size": 65535, 00:05:11.057 "bdev_io_cache_size": 256, 00:05:11.057 "bdev_auto_examine": true, 00:05:11.057 "iobuf_small_cache_size": 128, 00:05:11.057 "iobuf_large_cache_size": 16 00:05:11.057 } 00:05:11.057 }, 00:05:11.057 { 00:05:11.057 "method": "bdev_raid_set_options", 00:05:11.057 "params": { 00:05:11.057 "process_window_size_kb": 1024, 00:05:11.057 "process_max_bandwidth_mb_sec": 0 00:05:11.057 } 00:05:11.057 }, 00:05:11.057 { 00:05:11.057 "method": "bdev_iscsi_set_options", 00:05:11.057 "params": { 00:05:11.057 "timeout_sec": 30 00:05:11.057 } 00:05:11.057 }, 00:05:11.057 { 00:05:11.057 "method": "bdev_nvme_set_options", 00:05:11.057 "params": { 00:05:11.057 "action_on_timeout": "none", 00:05:11.057 "timeout_us": 0, 00:05:11.057 "timeout_admin_us": 0, 00:05:11.057 "keep_alive_timeout_ms": 10000, 00:05:11.057 
"arbitration_burst": 0, 00:05:11.057 "low_priority_weight": 0, 00:05:11.057 "medium_priority_weight": 0, 00:05:11.057 "high_priority_weight": 0, 00:05:11.057 "nvme_adminq_poll_period_us": 10000, 00:05:11.057 "nvme_ioq_poll_period_us": 0, 00:05:11.057 "io_queue_requests": 0, 00:05:11.057 "delay_cmd_submit": true, 00:05:11.057 "transport_retry_count": 4, 00:05:11.057 "bdev_retry_count": 3, 00:05:11.057 "transport_ack_timeout": 0, 00:05:11.057 "ctrlr_loss_timeout_sec": 0, 00:05:11.057 "reconnect_delay_sec": 0, 00:05:11.057 "fast_io_fail_timeout_sec": 0, 00:05:11.057 "disable_auto_failback": false, 00:05:11.057 "generate_uuids": false, 00:05:11.057 "transport_tos": 0, 00:05:11.057 "nvme_error_stat": false, 00:05:11.057 "rdma_srq_size": 0, 00:05:11.057 "io_path_stat": false, 00:05:11.057 "allow_accel_sequence": false, 00:05:11.057 "rdma_max_cq_size": 0, 00:05:11.057 "rdma_cm_event_timeout_ms": 0, 00:05:11.057 "dhchap_digests": [ 00:05:11.057 "sha256", 00:05:11.057 "sha384", 00:05:11.057 "sha512" 00:05:11.057 ], 00:05:11.057 "dhchap_dhgroups": [ 00:05:11.057 "null", 00:05:11.057 "ffdhe2048", 00:05:11.057 "ffdhe3072", 00:05:11.057 "ffdhe4096", 00:05:11.057 "ffdhe6144", 00:05:11.057 "ffdhe8192" 00:05:11.057 ] 00:05:11.057 } 00:05:11.057 }, 00:05:11.057 { 00:05:11.057 "method": "bdev_nvme_set_hotplug", 00:05:11.057 "params": { 00:05:11.057 "period_us": 100000, 00:05:11.057 "enable": false 00:05:11.057 } 00:05:11.057 }, 00:05:11.057 { 00:05:11.057 "method": "bdev_wait_for_examine" 00:05:11.057 } 00:05:11.057 ] 00:05:11.057 }, 00:05:11.057 { 00:05:11.057 "subsystem": "scsi", 00:05:11.057 "config": null 00:05:11.057 }, 00:05:11.057 { 00:05:11.057 "subsystem": "scheduler", 00:05:11.057 "config": [ 00:05:11.057 { 00:05:11.057 "method": "framework_set_scheduler", 00:05:11.057 "params": { 00:05:11.057 "name": "static" 00:05:11.057 } 00:05:11.057 } 00:05:11.057 ] 00:05:11.057 }, 00:05:11.057 { 00:05:11.057 "subsystem": "vhost_scsi", 00:05:11.057 "config": [] 00:05:11.057 }, 
00:05:11.057 { 00:05:11.057 "subsystem": "vhost_blk", 00:05:11.057 "config": [] 00:05:11.057 }, 00:05:11.057 { 00:05:11.057 "subsystem": "ublk", 00:05:11.057 "config": [] 00:05:11.057 }, 00:05:11.057 { 00:05:11.057 "subsystem": "nbd", 00:05:11.057 "config": [] 00:05:11.057 }, 00:05:11.057 { 00:05:11.057 "subsystem": "nvmf", 00:05:11.057 "config": [ 00:05:11.057 { 00:05:11.057 "method": "nvmf_set_config", 00:05:11.057 "params": { 00:05:11.057 "discovery_filter": "match_any", 00:05:11.057 "admin_cmd_passthru": { 00:05:11.057 "identify_ctrlr": false 00:05:11.057 } 00:05:11.057 } 00:05:11.057 }, 00:05:11.057 { 00:05:11.057 "method": "nvmf_set_max_subsystems", 00:05:11.058 "params": { 00:05:11.058 "max_subsystems": 1024 00:05:11.058 } 00:05:11.058 }, 00:05:11.058 { 00:05:11.058 "method": "nvmf_set_crdt", 00:05:11.058 "params": { 00:05:11.058 "crdt1": 0, 00:05:11.058 "crdt2": 0, 00:05:11.058 "crdt3": 0 00:05:11.058 } 00:05:11.058 }, 00:05:11.058 { 00:05:11.058 "method": "nvmf_create_transport", 00:05:11.058 "params": { 00:05:11.058 "trtype": "TCP", 00:05:11.058 "max_queue_depth": 128, 00:05:11.058 "max_io_qpairs_per_ctrlr": 127, 00:05:11.058 "in_capsule_data_size": 4096, 00:05:11.058 "max_io_size": 131072, 00:05:11.058 "io_unit_size": 131072, 00:05:11.058 "max_aq_depth": 128, 00:05:11.058 "num_shared_buffers": 511, 00:05:11.058 "buf_cache_size": 4294967295, 00:05:11.058 "dif_insert_or_strip": false, 00:05:11.058 "zcopy": false, 00:05:11.058 "c2h_success": true, 00:05:11.058 "sock_priority": 0, 00:05:11.058 "abort_timeout_sec": 1, 00:05:11.058 "ack_timeout": 0, 00:05:11.058 "data_wr_pool_size": 0 00:05:11.058 } 00:05:11.058 } 00:05:11.058 ] 00:05:11.058 }, 00:05:11.058 { 00:05:11.058 "subsystem": "iscsi", 00:05:11.058 "config": [ 00:05:11.058 { 00:05:11.058 "method": "iscsi_set_options", 00:05:11.058 "params": { 00:05:11.058 "node_base": "iqn.2016-06.io.spdk", 00:05:11.058 "max_sessions": 128, 00:05:11.058 "max_connections_per_session": 2, 00:05:11.058 "max_queue_depth": 
64, 00:05:11.058 "default_time2wait": 2, 00:05:11.058 "default_time2retain": 20, 00:05:11.058 "first_burst_length": 8192, 00:05:11.058 "immediate_data": true, 00:05:11.058 "allow_duplicated_isid": false, 00:05:11.058 "error_recovery_level": 0, 00:05:11.058 "nop_timeout": 60, 00:05:11.058 "nop_in_interval": 30, 00:05:11.058 "disable_chap": false, 00:05:11.058 "require_chap": false, 00:05:11.058 "mutual_chap": false, 00:05:11.058 "chap_group": 0, 00:05:11.058 "max_large_datain_per_connection": 64, 00:05:11.058 "max_r2t_per_connection": 4, 00:05:11.058 "pdu_pool_size": 36864, 00:05:11.058 "immediate_data_pool_size": 16384, 00:05:11.058 "data_out_pool_size": 2048 00:05:11.058 } 00:05:11.058 } 00:05:11.058 ] 00:05:11.058 } 00:05:11.058 ] 00:05:11.058 } 00:05:11.058 17:09:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:11.058 17:09:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59300 00:05:11.058 17:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59300 ']' 00:05:11.058 17:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59300 00:05:11.058 17:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:11.058 17:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:11.058 17:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59300 00:05:11.058 17:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:11.058 17:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:11.058 17:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59300' 00:05:11.058 killing process with pid 59300 00:05:11.058 17:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59300 00:05:11.058 
17:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59300 00:05:13.590 17:09:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59356 00:05:13.590 17:09:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:13.590 17:09:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:18.863 17:09:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59356 00:05:18.863 17:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59356 ']' 00:05:18.863 17:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59356 00:05:18.863 17:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:18.863 17:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.863 17:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59356 00:05:18.863 17:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.863 17:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.863 17:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59356' 00:05:18.863 killing process with pid 59356 00:05:18.863 17:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59356 00:05:18.863 17:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59356 00:05:20.772 17:09:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:20.772 17:09:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:21.031 00:05:21.031 real 
0m11.463s 00:05:21.031 user 0m10.749s 00:05:21.031 sys 0m1.055s 00:05:21.031 17:09:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.031 17:09:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.031 ************************************ 00:05:21.031 END TEST skip_rpc_with_json 00:05:21.031 ************************************ 00:05:21.031 17:09:39 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:21.031 17:09:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:21.031 17:09:39 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.031 17:09:39 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.031 17:09:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.031 ************************************ 00:05:21.031 START TEST skip_rpc_with_delay 00:05:21.031 ************************************ 00:05:21.031 17:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:21.031 17:09:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:21.031 17:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:21.031 17:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:21.031 17:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.031 17:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.031 17:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.031 17:09:39 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.031 17:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.031 17:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.031 17:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.031 17:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:21.031 17:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:21.031 [2024-07-22 17:09:39.924912] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:21.031 [2024-07-22 17:09:39.925093] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:21.290 17:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:21.290 17:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:21.290 17:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:21.290 17:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:21.290 00:05:21.290 real 0m0.210s 00:05:21.290 user 0m0.119s 00:05:21.290 sys 0m0.089s 00:05:21.290 17:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.290 17:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:21.290 ************************************ 00:05:21.290 END TEST skip_rpc_with_delay 00:05:21.290 ************************************ 00:05:21.290 17:09:40 skip_rpc -- common/autotest_common.sh@1142 -- 
# return 0 00:05:21.290 17:09:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:21.290 17:09:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:21.290 17:09:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:21.290 17:09:40 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.290 17:09:40 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.290 17:09:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.290 ************************************ 00:05:21.290 START TEST exit_on_failed_rpc_init 00:05:21.290 ************************************ 00:05:21.290 17:09:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:21.290 17:09:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59484 00:05:21.290 17:09:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59484 00:05:21.290 17:09:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.290 17:09:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59484 ']' 00:05:21.290 17:09:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.290 17:09:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.290 17:09:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:21.290 17:09:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.291 17:09:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:21.291 [2024-07-22 17:09:40.223062] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:21.291 [2024-07-22 17:09:40.223285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59484 ] 00:05:21.549 [2024-07-22 17:09:40.397652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.807 [2024-07-22 17:09:40.697798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.742 17:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.742 17:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:22.742 17:09:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.742 17:09:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:22.742 17:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:22.742 17:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:22.742 17:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.742 17:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.742 17:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.742 17:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.742 17:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.742 17:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.742 17:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.742 17:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:22.742 17:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:22.742 [2024-07-22 17:09:41.666001] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:22.742 [2024-07-22 17:09:41.666170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59512 ] 00:05:23.000 [2024-07-22 17:09:41.831427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.258 [2024-07-22 17:09:42.130619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.258 [2024-07-22 17:09:42.130756] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:23.258 [2024-07-22 17:09:42.130785] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:23.258 [2024-07-22 17:09:42.130800] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:23.824 17:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:23.824 17:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.824 17:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:23.824 17:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:23.824 17:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:23.824 17:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.824 17:09:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:23.824 17:09:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59484 00:05:23.824 17:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59484 ']' 00:05:23.824 17:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59484 00:05:23.824 17:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:23.824 17:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.824 17:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59484 00:05:23.824 17:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.824 17:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.824 killing process with pid 59484 00:05:23.824 17:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 59484' 00:05:23.824 17:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59484 00:05:23.824 17:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59484 00:05:26.377 00:05:26.377 real 0m4.879s 00:05:26.377 user 0m5.620s 00:05:26.377 sys 0m0.686s 00:05:26.377 17:09:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.377 17:09:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:26.377 ************************************ 00:05:26.377 END TEST exit_on_failed_rpc_init 00:05:26.377 ************************************ 00:05:26.377 17:09:44 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:26.377 17:09:44 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:26.377 00:05:26.377 real 0m24.212s 00:05:26.377 user 0m23.363s 00:05:26.377 sys 0m2.482s 00:05:26.377 17:09:44 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.377 17:09:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.377 ************************************ 00:05:26.377 END TEST skip_rpc 00:05:26.377 ************************************ 00:05:26.377 17:09:45 -- common/autotest_common.sh@1142 -- # return 0 00:05:26.377 17:09:45 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:26.377 17:09:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.377 17:09:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.377 17:09:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.377 ************************************ 00:05:26.377 START TEST rpc_client 00:05:26.377 ************************************ 00:05:26.377 17:09:45 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:26.377 * Looking for test storage... 
00:05:26.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:26.377 17:09:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:26.377 OK 00:05:26.377 17:09:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:26.377 00:05:26.377 real 0m0.153s 00:05:26.377 user 0m0.071s 00:05:26.377 sys 0m0.087s 00:05:26.377 17:09:45 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.378 17:09:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:26.378 ************************************ 00:05:26.378 END TEST rpc_client 00:05:26.378 ************************************ 00:05:26.378 17:09:45 -- common/autotest_common.sh@1142 -- # return 0 00:05:26.378 17:09:45 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:26.378 17:09:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.378 17:09:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.378 17:09:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.378 ************************************ 00:05:26.378 START TEST json_config 00:05:26.378 ************************************ 00:05:26.378 17:09:45 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5c61f564-1952-48f3-b7d3-94aa342140a5 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5c61f564-1952-48f3-b7d3-94aa342140a5 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:26.378 17:09:45 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.378 17:09:45 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.378 17:09:45 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.378 17:09:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.378 17:09:45 json_config -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.378 17:09:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.378 17:09:45 json_config -- paths/export.sh@5 -- # export PATH 00:05:26.378 17:09:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@47 -- # : 0 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:05:26.378 17:09:45 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:26.378 17:09:45 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@11 -- # [[ 1 -eq 1 ]] 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:05:26.378 17:09:45 json_config -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:05:26.378 17:09:45 json_config -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:05:26.378 17:09:45 json_config -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:05:26.378 17:09:45 json_config -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:05:26.378 17:09:45 json_config -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:05:26.378 17:09:45 json_config -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:05:26.378 17:09:45 json_config -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:05:26.378 17:09:45 json_config -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:05:26.378 17:09:45 json_config -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:05:26.378 17:09:45 json_config -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:05:26.378 17:09:45 json_config -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:05:26.378 17:09:45 json_config -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:05:26.378 17:09:45 json_config -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:05:26.378 17:09:45 json_config -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:05:26.378 17:09:45 json_config -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:05:26.378 17:09:45 json_config -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:05:26.378 17:09:45 json_config -- 
iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:05:26.378 17:09:45 json_config -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:05:26.378 17:09:45 json_config -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:26.378 INFO: JSON configuration test init 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON 
configuration test init' 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:26.378 17:09:45 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.378 17:09:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:26.378 17:09:45 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.378 17:09:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.378 17:09:45 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:26.378 17:09:45 json_config -- json_config/common.sh@9 -- # local app=target 00:05:26.378 17:09:45 json_config -- json_config/common.sh@10 -- # shift 00:05:26.378 17:09:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:26.378 17:09:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:26.378 17:09:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:26.378 17:09:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.378 17:09:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.378 17:09:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59662 00:05:26.378 Waiting for target to run... 00:05:26.378 17:09:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:05:26.378 17:09:45 json_config -- json_config/common.sh@25 -- # waitforlisten 59662 /var/tmp/spdk_tgt.sock 00:05:26.378 17:09:45 json_config -- common/autotest_common.sh@829 -- # '[' -z 59662 ']' 00:05:26.378 17:09:45 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.378 17:09:45 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:26.378 17:09:45 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:26.378 17:09:45 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.378 17:09:45 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.378 17:09:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.637 [2024-07-22 17:09:45.488100] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:05:26.637 [2024-07-22 17:09:45.488306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59662 ] 00:05:27.203 [2024-07-22 17:09:45.954143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.461 [2024-07-22 17:09:46.208800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.461 17:09:46 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.461 17:09:46 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:27.461 00:05:27.461 17:09:46 json_config -- json_config/common.sh@26 -- # echo '' 00:05:27.461 17:09:46 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:27.461 17:09:46 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:27.461 17:09:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:27.461 17:09:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.461 17:09:46 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:27.461 17:09:46 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:27.461 17:09:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:27.461 17:09:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.461 17:09:46 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:27.461 17:09:46 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:27.461 17:09:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 
00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:28.836 17:09:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.836 17:09:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:28.836 17:09:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@51 -- # sort 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:28.836 17:09:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.836 17:09:47 json_config -- 
common/autotest_common.sh@10 -- # set +x 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@291 -- # create_iscsi_subsystem_config 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@225 -- # timing_enter create_iscsi_subsystem_config 00:05:28.836 17:09:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.836 17:09:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.836 17:09:47 json_config -- json_config/json_config.sh@226 -- # tgt_rpc bdev_malloc_create 64 1024 --name MallocForIscsi0 00:05:28.836 17:09:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 64 1024 --name MallocForIscsi0 00:05:29.093 MallocForIscsi0 00:05:29.351 17:09:48 json_config -- json_config/json_config.sh@227 -- # tgt_rpc iscsi_create_portal_group 1 127.0.0.1:3260 00:05:29.351 17:09:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_portal_group 1 127.0.0.1:3260 00:05:29.351 17:09:48 json_config -- json_config/json_config.sh@228 -- # tgt_rpc iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:05:29.351 17:09:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:05:29.916 17:09:48 json_config -- json_config/json_config.sh@229 -- # tgt_rpc iscsi_create_target_node Target3 Target3_alias MallocForIscsi0:0 1:2 64 -d 00:05:29.916 17:09:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock iscsi_create_target_node Target3 Target3_alias MallocForIscsi0:0 1:2 64 -d 00:05:29.916 17:09:48 json_config -- json_config/json_config.sh@230 -- # timing_exit create_iscsi_subsystem_config 00:05:29.916 17:09:48 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.916 17:09:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.175 17:09:48 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:05:30.175 17:09:48 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:30.175 17:09:48 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.175 17:09:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.175 17:09:48 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:30.175 17:09:48 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:30.175 17:09:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:30.433 MallocBdevForConfigChangeCheck 00:05:30.433 17:09:49 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:30.433 17:09:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.433 17:09:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.433 17:09:49 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:30.433 17:09:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:30.692 INFO: shutting down applications... 00:05:30.692 17:09:49 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
00:05:30.692 17:09:49 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:30.692 17:09:49 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:30.692 17:09:49 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:30.692 17:09:49 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:31.258 Calling clear_iscsi_subsystem 00:05:31.258 Calling clear_nvmf_subsystem 00:05:31.258 Calling clear_nbd_subsystem 00:05:31.258 Calling clear_ublk_subsystem 00:05:31.258 Calling clear_vhost_blk_subsystem 00:05:31.258 Calling clear_vhost_scsi_subsystem 00:05:31.258 Calling clear_bdev_subsystem 00:05:31.258 17:09:50 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:31.258 17:09:50 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:31.258 17:09:50 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:31.258 17:09:50 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:31.258 17:09:50 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:31.258 17:09:50 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:31.516 17:09:50 json_config -- json_config/json_config.sh@349 -- # break 00:05:31.516 17:09:50 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:31.516 17:09:50 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:31.516 17:09:50 json_config -- json_config/common.sh@31 -- # local app=target 00:05:31.516 17:09:50 json_config -- json_config/common.sh@34 -- # [[ -n 22 
]] 00:05:31.516 17:09:50 json_config -- json_config/common.sh@35 -- # [[ -n 59662 ]] 00:05:31.516 17:09:50 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59662 00:05:31.516 17:09:50 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:31.516 17:09:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.516 17:09:50 json_config -- json_config/common.sh@41 -- # kill -0 59662 00:05:31.516 17:09:50 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:32.086 17:09:50 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:32.086 17:09:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.086 17:09:50 json_config -- json_config/common.sh@41 -- # kill -0 59662 00:05:32.086 17:09:50 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:32.666 17:09:51 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:32.666 17:09:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.666 17:09:51 json_config -- json_config/common.sh@41 -- # kill -0 59662 00:05:32.666 17:09:51 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:32.666 17:09:51 json_config -- json_config/common.sh@43 -- # break 00:05:32.666 17:09:51 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:32.666 SPDK target shutdown done 00:05:32.666 17:09:51 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:32.666 INFO: relaunching applications... 00:05:32.666 17:09:51 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
00:05:32.666 17:09:51 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:32.666 17:09:51 json_config -- json_config/common.sh@9 -- # local app=target 00:05:32.666 17:09:51 json_config -- json_config/common.sh@10 -- # shift 00:05:32.666 17:09:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:32.666 17:09:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:32.666 17:09:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:32.666 17:09:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.666 17:09:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.666 17:09:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59862 00:05:32.666 17:09:51 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:32.666 Waiting for target to run... 00:05:32.666 17:09:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:32.666 17:09:51 json_config -- json_config/common.sh@25 -- # waitforlisten 59862 /var/tmp/spdk_tgt.sock 00:05:32.666 17:09:51 json_config -- common/autotest_common.sh@829 -- # '[' -z 59862 ']' 00:05:32.666 17:09:51 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:32.666 17:09:51 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:32.666 17:09:51 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:32.666 17:09:51 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.666 17:09:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.666 [2024-07-22 17:09:51.573711] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:32.666 [2024-07-22 17:09:51.573934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59862 ] 00:05:33.233 [2024-07-22 17:09:52.063936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.492 [2024-07-22 17:09:52.307922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.426 17:09:53 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.426 17:09:53 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:34.426 17:09:53 json_config -- json_config/common.sh@26 -- # echo '' 00:05:34.426 00:05:34.426 17:09:53 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:34.426 INFO: Checking if target configuration is the same... 00:05:34.426 17:09:53 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:34.426 17:09:53 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:34.426 17:09:53 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:34.426 17:09:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.426 + '[' 2 -ne 2 ']' 00:05:34.427 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:34.427 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:34.427 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:34.427 +++ basename /dev/fd/62 00:05:34.427 ++ mktemp /tmp/62.XXX 00:05:34.427 + tmp_file_1=/tmp/62.YdX 00:05:34.427 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:34.427 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:34.427 + tmp_file_2=/tmp/spdk_tgt_config.json.lsf 00:05:34.427 + ret=0 00:05:34.427 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:34.993 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:34.993 + diff -u /tmp/62.YdX /tmp/spdk_tgt_config.json.lsf 00:05:34.993 + echo 'INFO: JSON config files are the same' 00:05:34.993 INFO: JSON config files are the same 00:05:34.993 + rm /tmp/62.YdX /tmp/spdk_tgt_config.json.lsf 00:05:34.993 + exit 0 00:05:34.993 17:09:53 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:34.993 17:09:53 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:34.993 INFO: changing configuration and checking if this can be detected... 
00:05:34.993 17:09:53 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:34.993 17:09:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:35.254 17:09:54 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:35.254 17:09:54 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:35.254 17:09:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.254 + '[' 2 -ne 2 ']' 00:05:35.254 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:35.254 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:35.254 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:35.254 +++ basename /dev/fd/62 00:05:35.254 ++ mktemp /tmp/62.XXX 00:05:35.254 + tmp_file_1=/tmp/62.Iby 00:05:35.254 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:35.254 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:35.254 + tmp_file_2=/tmp/spdk_tgt_config.json.Ucf 00:05:35.254 + ret=0 00:05:35.254 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:35.820 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:35.820 + diff -u /tmp/62.Iby /tmp/spdk_tgt_config.json.Ucf 00:05:35.820 + ret=1 00:05:35.820 + echo '=== Start of file: /tmp/62.Iby ===' 00:05:35.820 + cat /tmp/62.Iby 00:05:35.820 + echo '=== End of file: /tmp/62.Iby ===' 00:05:35.820 + echo '' 00:05:35.820 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Ucf ===' 00:05:35.820 + cat /tmp/spdk_tgt_config.json.Ucf 00:05:35.820 + echo '=== End of file: /tmp/spdk_tgt_config.json.Ucf ===' 00:05:35.820 + echo '' 00:05:35.820 + rm /tmp/62.Iby 
/tmp/spdk_tgt_config.json.Ucf 00:05:35.820 + exit 1 00:05:35.820 INFO: configuration change detected. 00:05:35.820 17:09:54 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:35.820 17:09:54 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:35.820 17:09:54 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:35.820 17:09:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:35.820 17:09:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.820 17:09:54 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:35.820 17:09:54 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:35.820 17:09:54 json_config -- json_config/json_config.sh@321 -- # [[ -n 59862 ]] 00:05:35.820 17:09:54 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:35.820 17:09:54 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:35.820 17:09:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:35.820 17:09:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.820 17:09:54 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:35.820 17:09:54 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:35.820 17:09:54 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:35.820 17:09:54 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:35.820 17:09:54 json_config -- json_config/json_config.sh@201 -- # [[ 1 -eq 1 ]] 00:05:35.820 17:09:54 json_config -- json_config/json_config.sh@202 -- # rbd_cleanup 00:05:35.820 17:09:54 json_config -- common/autotest_common.sh@1031 -- # hash ceph 00:05:35.820 17:09:54 json_config -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:05:35.820 + base_dir=/var/tmp/ceph 
00:05:35.820 + image=/var/tmp/ceph/ceph_raw.img 00:05:35.821 + dev=/dev/loop200 00:05:35.821 + pkill -9 ceph 00:05:35.821 + sleep 3 00:05:39.106 + umount /dev/loop200p2 00:05:39.106 umount: /dev/loop200p2: no mount point specified. 00:05:39.106 + losetup -d /dev/loop200 00:05:39.106 losetup: /dev/loop200: failed to use device: No such device 00:05:39.106 + rm -rf /var/tmp/ceph 00:05:39.106 17:09:57 json_config -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:05:39.106 17:09:57 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:39.106 17:09:57 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:39.106 17:09:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.106 17:09:57 json_config -- json_config/json_config.sh@327 -- # killprocess 59862 00:05:39.106 17:09:57 json_config -- common/autotest_common.sh@948 -- # '[' -z 59862 ']' 00:05:39.106 17:09:57 json_config -- common/autotest_common.sh@952 -- # kill -0 59862 00:05:39.106 17:09:57 json_config -- common/autotest_common.sh@953 -- # uname 00:05:39.106 17:09:57 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.106 17:09:57 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59862 00:05:39.106 killing process with pid 59862 00:05:39.106 17:09:57 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.106 17:09:57 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.106 17:09:57 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59862' 00:05:39.106 17:09:57 json_config -- common/autotest_common.sh@967 -- # kill 59862 00:05:39.106 17:09:57 json_config -- common/autotest_common.sh@972 -- # wait 59862 00:05:40.043 17:09:58 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 
00:05:40.043 17:09:58 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:40.043 17:09:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:40.043 17:09:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.043 INFO: Success 00:05:40.043 17:09:58 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:40.043 17:09:58 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:40.043 00:05:40.043 real 0m13.512s 00:05:40.043 user 0m16.407s 00:05:40.043 sys 0m2.075s 00:05:40.043 17:09:58 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.043 ************************************ 00:05:40.043 END TEST json_config 00:05:40.043 ************************************ 00:05:40.043 17:09:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.043 17:09:58 -- common/autotest_common.sh@1142 -- # return 0 00:05:40.043 17:09:58 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:40.043 17:09:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.043 17:09:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.043 17:09:58 -- common/autotest_common.sh@10 -- # set +x 00:05:40.043 ************************************ 00:05:40.043 START TEST json_config_extra_key 00:05:40.043 ************************************ 00:05:40.043 17:09:58 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:40.043 17:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5c61f564-1952-48f3-b7d3-94aa342140a5 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5c61f564-1952-48f3-b7d3-94aa342140a5 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:40.043 17:09:58 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.043 17:09:58 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.043 17:09:58 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.043 17:09:58 json_config_extra_key -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.043 17:09:58 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.043 17:09:58 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.043 17:09:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:40.043 17:09:58 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:40.043 17:09:58 
json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:40.043 17:09:58 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:40.043 17:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:40.043 17:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:40.043 17:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:40.043 17:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:40.043 17:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:40.043 17:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:40.043 17:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:40.043 17:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:40.043 17:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:40.043 INFO: launching applications... 
00:05:40.043 17:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:40.043 17:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:40.043 17:09:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:40.043 17:09:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:40.043 17:09:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:40.043 17:09:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:40.043 17:09:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:40.043 17:09:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:40.043 17:09:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.043 17:09:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.043 Waiting for target to run... 00:05:40.043 17:09:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=60060 00:05:40.043 17:09:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:05:40.043 17:09:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 60060 /var/tmp/spdk_tgt.sock 00:05:40.044 17:09:58 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:40.044 17:09:58 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 60060 ']' 00:05:40.044 17:09:58 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:40.044 17:09:58 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:40.044 17:09:58 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:40.044 17:09:58 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.044 17:09:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:40.302 [2024-07-22 17:09:59.011389] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:40.302 [2024-07-22 17:09:59.011586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60060 ] 00:05:40.561 [2024-07-22 17:09:59.471144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.820 [2024-07-22 17:09:59.703732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.755 00:05:41.755 INFO: shutting down applications... 
00:05:41.755 17:10:00 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.755 17:10:00 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:41.755 17:10:00 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:41.755 17:10:00 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:41.755 17:10:00 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:41.755 17:10:00 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:41.755 17:10:00 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:41.755 17:10:00 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 60060 ]] 00:05:41.755 17:10:00 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 60060 00:05:41.755 17:10:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:41.755 17:10:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.755 17:10:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60060 00:05:41.755 17:10:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:42.014 17:10:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:42.014 17:10:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.014 17:10:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60060 00:05:42.014 17:10:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:42.581 17:10:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:42.581 17:10:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.581 17:10:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60060 00:05:42.581 17:10:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:43.148 17:10:01 
json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:43.148 17:10:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:43.148 17:10:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60060 00:05:43.148 17:10:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:43.716 17:10:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:43.716 17:10:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:43.716 17:10:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60060 00:05:43.716 17:10:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:43.974 17:10:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:43.974 17:10:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:43.974 17:10:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60060 00:05:43.974 17:10:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:44.548 17:10:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:44.548 17:10:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.548 17:10:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60060 00:05:44.548 17:10:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:44.548 17:10:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:44.548 17:10:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:44.548 SPDK target shutdown done 00:05:44.548 Success 00:05:44.548 17:10:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:44.548 17:10:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:44.548 00:05:44.548 real 0m4.592s 00:05:44.548 user 0m3.936s 00:05:44.548 sys 0m0.618s 00:05:44.548 ************************************ 
00:05:44.548 END TEST json_config_extra_key 00:05:44.548 ************************************ 00:05:44.548 17:10:03 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.548 17:10:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:44.548 17:10:03 -- common/autotest_common.sh@1142 -- # return 0 00:05:44.548 17:10:03 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.548 17:10:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.548 17:10:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.548 17:10:03 -- common/autotest_common.sh@10 -- # set +x 00:05:44.548 ************************************ 00:05:44.548 START TEST alias_rpc 00:05:44.548 ************************************ 00:05:44.548 17:10:03 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.807 * Looking for test storage... 00:05:44.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:44.807 17:10:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:44.807 17:10:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=60163 00:05:44.807 17:10:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 60163 00:05:44.807 17:10:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.807 17:10:03 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 60163 ']' 00:05:44.807 17:10:03 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:44.807 17:10:03 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.807 17:10:03 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.807 17:10:03 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.807 17:10:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.807 [2024-07-22 17:10:03.675638] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:44.807 [2024-07-22 17:10:03.675867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60163 ] 00:05:45.095 [2024-07-22 17:10:03.852400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.357 [2024-07-22 17:10:04.127600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.293 17:10:04 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.293 17:10:04 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:46.293 17:10:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:46.552 17:10:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 60163 00:05:46.552 17:10:05 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 60163 ']' 00:05:46.552 17:10:05 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 60163 00:05:46.552 17:10:05 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:46.552 17:10:05 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.552 17:10:05 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60163 00:05:46.552 killing process with pid 60163 00:05:46.552 17:10:05 alias_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:46.552 17:10:05 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.552 17:10:05 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60163' 00:05:46.552 17:10:05 alias_rpc -- common/autotest_common.sh@967 -- # kill 60163 00:05:46.552 17:10:05 alias_rpc -- common/autotest_common.sh@972 -- # wait 60163 00:05:49.082 ************************************ 00:05:49.082 END TEST alias_rpc 00:05:49.082 ************************************ 00:05:49.082 00:05:49.082 real 0m4.125s 00:05:49.082 user 0m4.167s 00:05:49.082 sys 0m0.642s 00:05:49.082 17:10:07 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.082 17:10:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.082 17:10:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:49.082 17:10:07 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:49.082 17:10:07 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:49.082 17:10:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.082 17:10:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.082 17:10:07 -- common/autotest_common.sh@10 -- # set +x 00:05:49.082 ************************************ 00:05:49.082 START TEST spdkcli_tcp 00:05:49.082 ************************************ 00:05:49.082 17:10:07 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:49.082 * Looking for test storage... 
00:05:49.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:49.082 17:10:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:49.083 17:10:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:49.083 17:10:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:49.083 17:10:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:49.083 17:10:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:49.083 17:10:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:49.083 17:10:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:49.083 17:10:07 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:49.083 17:10:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.083 17:10:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=60266 00:05:49.083 17:10:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 60266 00:05:49.083 17:10:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:49.083 17:10:07 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 60266 ']' 00:05:49.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.083 17:10:07 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.083 17:10:07 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.083 17:10:07 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:49.083 17:10:07 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.083 17:10:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.083 [2024-07-22 17:10:07.860161] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:49.083 [2024-07-22 17:10:07.860372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60266 ] 00:05:49.341 [2024-07-22 17:10:08.038369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.599 [2024-07-22 17:10:08.326423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.599 [2024-07-22 17:10:08.326445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.534 17:10:09 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.534 17:10:09 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:50.534 17:10:09 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=60289 00:05:50.534 17:10:09 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:50.534 17:10:09 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:50.534 [ 00:05:50.534 "bdev_malloc_delete", 00:05:50.534 "bdev_malloc_create", 00:05:50.534 "bdev_null_resize", 00:05:50.534 "bdev_null_delete", 00:05:50.534 "bdev_null_create", 00:05:50.534 "bdev_nvme_cuse_unregister", 00:05:50.534 "bdev_nvme_cuse_register", 00:05:50.534 "bdev_opal_new_user", 00:05:50.534 "bdev_opal_set_lock_state", 00:05:50.534 "bdev_opal_delete", 00:05:50.534 "bdev_opal_get_info", 00:05:50.534 "bdev_opal_create", 00:05:50.534 "bdev_nvme_opal_revert", 00:05:50.534 "bdev_nvme_opal_init", 00:05:50.534 "bdev_nvme_send_cmd", 00:05:50.534 
"bdev_nvme_get_path_iostat", 00:05:50.534 "bdev_nvme_get_mdns_discovery_info", 00:05:50.534 "bdev_nvme_stop_mdns_discovery", 00:05:50.534 "bdev_nvme_start_mdns_discovery", 00:05:50.534 "bdev_nvme_set_multipath_policy", 00:05:50.534 "bdev_nvme_set_preferred_path", 00:05:50.534 "bdev_nvme_get_io_paths", 00:05:50.534 "bdev_nvme_remove_error_injection", 00:05:50.534 "bdev_nvme_add_error_injection", 00:05:50.534 "bdev_nvme_get_discovery_info", 00:05:50.534 "bdev_nvme_stop_discovery", 00:05:50.534 "bdev_nvme_start_discovery", 00:05:50.534 "bdev_nvme_get_controller_health_info", 00:05:50.534 "bdev_nvme_disable_controller", 00:05:50.534 "bdev_nvme_enable_controller", 00:05:50.534 "bdev_nvme_reset_controller", 00:05:50.534 "bdev_nvme_get_transport_statistics", 00:05:50.534 "bdev_nvme_apply_firmware", 00:05:50.534 "bdev_nvme_detach_controller", 00:05:50.534 "bdev_nvme_get_controllers", 00:05:50.534 "bdev_nvme_attach_controller", 00:05:50.534 "bdev_nvme_set_hotplug", 00:05:50.534 "bdev_nvme_set_options", 00:05:50.534 "bdev_passthru_delete", 00:05:50.534 "bdev_passthru_create", 00:05:50.534 "bdev_lvol_set_parent_bdev", 00:05:50.534 "bdev_lvol_set_parent", 00:05:50.534 "bdev_lvol_check_shallow_copy", 00:05:50.534 "bdev_lvol_start_shallow_copy", 00:05:50.534 "bdev_lvol_grow_lvstore", 00:05:50.534 "bdev_lvol_get_lvols", 00:05:50.534 "bdev_lvol_get_lvstores", 00:05:50.534 "bdev_lvol_delete", 00:05:50.534 "bdev_lvol_set_read_only", 00:05:50.534 "bdev_lvol_resize", 00:05:50.534 "bdev_lvol_decouple_parent", 00:05:50.534 "bdev_lvol_inflate", 00:05:50.534 "bdev_lvol_rename", 00:05:50.534 "bdev_lvol_clone_bdev", 00:05:50.534 "bdev_lvol_clone", 00:05:50.534 "bdev_lvol_snapshot", 00:05:50.534 "bdev_lvol_create", 00:05:50.534 "bdev_lvol_delete_lvstore", 00:05:50.534 "bdev_lvol_rename_lvstore", 00:05:50.534 "bdev_lvol_create_lvstore", 00:05:50.534 "bdev_raid_set_options", 00:05:50.534 "bdev_raid_remove_base_bdev", 00:05:50.534 "bdev_raid_add_base_bdev", 00:05:50.534 "bdev_raid_delete", 
00:05:50.534 "bdev_raid_create", 00:05:50.534 "bdev_raid_get_bdevs", 00:05:50.534 "bdev_error_inject_error", 00:05:50.534 "bdev_error_delete", 00:05:50.534 "bdev_error_create", 00:05:50.534 "bdev_split_delete", 00:05:50.534 "bdev_split_create", 00:05:50.534 "bdev_delay_delete", 00:05:50.534 "bdev_delay_create", 00:05:50.534 "bdev_delay_update_latency", 00:05:50.534 "bdev_zone_block_delete", 00:05:50.534 "bdev_zone_block_create", 00:05:50.534 "blobfs_create", 00:05:50.534 "blobfs_detect", 00:05:50.534 "blobfs_set_cache_size", 00:05:50.534 "bdev_aio_delete", 00:05:50.534 "bdev_aio_rescan", 00:05:50.534 "bdev_aio_create", 00:05:50.534 "bdev_ftl_set_property", 00:05:50.534 "bdev_ftl_get_properties", 00:05:50.534 "bdev_ftl_get_stats", 00:05:50.534 "bdev_ftl_unmap", 00:05:50.534 "bdev_ftl_unload", 00:05:50.534 "bdev_ftl_delete", 00:05:50.534 "bdev_ftl_load", 00:05:50.534 "bdev_ftl_create", 00:05:50.534 "bdev_virtio_attach_controller", 00:05:50.534 "bdev_virtio_scsi_get_devices", 00:05:50.534 "bdev_virtio_detach_controller", 00:05:50.534 "bdev_virtio_blk_set_hotplug", 00:05:50.534 "bdev_iscsi_delete", 00:05:50.534 "bdev_iscsi_create", 00:05:50.534 "bdev_iscsi_set_options", 00:05:50.534 "bdev_rbd_get_clusters_info", 00:05:50.534 "bdev_rbd_unregister_cluster", 00:05:50.534 "bdev_rbd_register_cluster", 00:05:50.534 "bdev_rbd_resize", 00:05:50.534 "bdev_rbd_delete", 00:05:50.534 "bdev_rbd_create", 00:05:50.534 "accel_error_inject_error", 00:05:50.534 "ioat_scan_accel_module", 00:05:50.534 "dsa_scan_accel_module", 00:05:50.534 "iaa_scan_accel_module", 00:05:50.534 "keyring_file_remove_key", 00:05:50.534 "keyring_file_add_key", 00:05:50.534 "keyring_linux_set_options", 00:05:50.534 "iscsi_get_histogram", 00:05:50.534 "iscsi_enable_histogram", 00:05:50.534 "iscsi_set_options", 00:05:50.534 "iscsi_get_auth_groups", 00:05:50.534 "iscsi_auth_group_remove_secret", 00:05:50.534 "iscsi_auth_group_add_secret", 00:05:50.534 "iscsi_delete_auth_group", 00:05:50.534 
"iscsi_create_auth_group", 00:05:50.534 "iscsi_set_discovery_auth", 00:05:50.534 "iscsi_get_options", 00:05:50.534 "iscsi_target_node_request_logout", 00:05:50.534 "iscsi_target_node_set_redirect", 00:05:50.534 "iscsi_target_node_set_auth", 00:05:50.534 "iscsi_target_node_add_lun", 00:05:50.534 "iscsi_get_stats", 00:05:50.534 "iscsi_get_connections", 00:05:50.534 "iscsi_portal_group_set_auth", 00:05:50.534 "iscsi_start_portal_group", 00:05:50.534 "iscsi_delete_portal_group", 00:05:50.534 "iscsi_create_portal_group", 00:05:50.534 "iscsi_get_portal_groups", 00:05:50.534 "iscsi_delete_target_node", 00:05:50.534 "iscsi_target_node_remove_pg_ig_maps", 00:05:50.534 "iscsi_target_node_add_pg_ig_maps", 00:05:50.534 "iscsi_create_target_node", 00:05:50.534 "iscsi_get_target_nodes", 00:05:50.534 "iscsi_delete_initiator_group", 00:05:50.534 "iscsi_initiator_group_remove_initiators", 00:05:50.534 "iscsi_initiator_group_add_initiators", 00:05:50.534 "iscsi_create_initiator_group", 00:05:50.534 "iscsi_get_initiator_groups", 00:05:50.534 "nvmf_set_crdt", 00:05:50.534 "nvmf_set_config", 00:05:50.534 "nvmf_set_max_subsystems", 00:05:50.534 "nvmf_stop_mdns_prr", 00:05:50.534 "nvmf_publish_mdns_prr", 00:05:50.534 "nvmf_subsystem_get_listeners", 00:05:50.534 "nvmf_subsystem_get_qpairs", 00:05:50.534 "nvmf_subsystem_get_controllers", 00:05:50.534 "nvmf_get_stats", 00:05:50.534 "nvmf_get_transports", 00:05:50.534 "nvmf_create_transport", 00:05:50.534 "nvmf_get_targets", 00:05:50.534 "nvmf_delete_target", 00:05:50.534 "nvmf_create_target", 00:05:50.534 "nvmf_subsystem_allow_any_host", 00:05:50.534 "nvmf_subsystem_remove_host", 00:05:50.534 "nvmf_subsystem_add_host", 00:05:50.534 "nvmf_ns_remove_host", 00:05:50.534 "nvmf_ns_add_host", 00:05:50.534 "nvmf_subsystem_remove_ns", 00:05:50.534 "nvmf_subsystem_add_ns", 00:05:50.534 "nvmf_subsystem_listener_set_ana_state", 00:05:50.534 "nvmf_discovery_get_referrals", 00:05:50.534 "nvmf_discovery_remove_referral", 00:05:50.534 
"nvmf_discovery_add_referral", 00:05:50.534 "nvmf_subsystem_remove_listener", 00:05:50.534 "nvmf_subsystem_add_listener", 00:05:50.534 "nvmf_delete_subsystem", 00:05:50.534 "nvmf_create_subsystem", 00:05:50.534 "nvmf_get_subsystems", 00:05:50.534 "env_dpdk_get_mem_stats", 00:05:50.534 "nbd_get_disks", 00:05:50.534 "nbd_stop_disk", 00:05:50.534 "nbd_start_disk", 00:05:50.534 "ublk_recover_disk", 00:05:50.534 "ublk_get_disks", 00:05:50.534 "ublk_stop_disk", 00:05:50.534 "ublk_start_disk", 00:05:50.534 "ublk_destroy_target", 00:05:50.534 "ublk_create_target", 00:05:50.534 "virtio_blk_create_transport", 00:05:50.534 "virtio_blk_get_transports", 00:05:50.534 "vhost_controller_set_coalescing", 00:05:50.534 "vhost_get_controllers", 00:05:50.534 "vhost_delete_controller", 00:05:50.534 "vhost_create_blk_controller", 00:05:50.534 "vhost_scsi_controller_remove_target", 00:05:50.534 "vhost_scsi_controller_add_target", 00:05:50.534 "vhost_start_scsi_controller", 00:05:50.534 "vhost_create_scsi_controller", 00:05:50.534 "thread_set_cpumask", 00:05:50.534 "framework_get_governor", 00:05:50.534 "framework_get_scheduler", 00:05:50.534 "framework_set_scheduler", 00:05:50.534 "framework_get_reactors", 00:05:50.534 "thread_get_io_channels", 00:05:50.534 "thread_get_pollers", 00:05:50.534 "thread_get_stats", 00:05:50.534 "framework_monitor_context_switch", 00:05:50.534 "spdk_kill_instance", 00:05:50.534 "log_enable_timestamps", 00:05:50.534 "log_get_flags", 00:05:50.534 "log_clear_flag", 00:05:50.534 "log_set_flag", 00:05:50.534 "log_get_level", 00:05:50.534 "log_set_level", 00:05:50.534 "log_get_print_level", 00:05:50.534 "log_set_print_level", 00:05:50.534 "framework_enable_cpumask_locks", 00:05:50.534 "framework_disable_cpumask_locks", 00:05:50.534 "framework_wait_init", 00:05:50.534 "framework_start_init", 00:05:50.534 "scsi_get_devices", 00:05:50.535 "bdev_get_histogram", 00:05:50.535 "bdev_enable_histogram", 00:05:50.535 "bdev_set_qos_limit", 00:05:50.535 
"bdev_set_qd_sampling_period", 00:05:50.535 "bdev_get_bdevs", 00:05:50.535 "bdev_reset_iostat", 00:05:50.535 "bdev_get_iostat", 00:05:50.535 "bdev_examine", 00:05:50.535 "bdev_wait_for_examine", 00:05:50.535 "bdev_set_options", 00:05:50.535 "notify_get_notifications", 00:05:50.535 "notify_get_types", 00:05:50.535 "accel_get_stats", 00:05:50.535 "accel_set_options", 00:05:50.535 "accel_set_driver", 00:05:50.535 "accel_crypto_key_destroy", 00:05:50.535 "accel_crypto_keys_get", 00:05:50.535 "accel_crypto_key_create", 00:05:50.535 "accel_assign_opc", 00:05:50.535 "accel_get_module_info", 00:05:50.535 "accel_get_opc_assignments", 00:05:50.535 "vmd_rescan", 00:05:50.535 "vmd_remove_device", 00:05:50.535 "vmd_enable", 00:05:50.535 "sock_get_default_impl", 00:05:50.535 "sock_set_default_impl", 00:05:50.535 "sock_impl_set_options", 00:05:50.535 "sock_impl_get_options", 00:05:50.535 "iobuf_get_stats", 00:05:50.535 "iobuf_set_options", 00:05:50.535 "framework_get_pci_devices", 00:05:50.535 "framework_get_config", 00:05:50.535 "framework_get_subsystems", 00:05:50.535 "trace_get_info", 00:05:50.535 "trace_get_tpoint_group_mask", 00:05:50.535 "trace_disable_tpoint_group", 00:05:50.535 "trace_enable_tpoint_group", 00:05:50.535 "trace_clear_tpoint_mask", 00:05:50.535 "trace_set_tpoint_mask", 00:05:50.535 "keyring_get_keys", 00:05:50.535 "spdk_get_version", 00:05:50.535 "rpc_get_methods" 00:05:50.535 ] 00:05:50.535 17:10:09 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:50.535 17:10:09 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:50.535 17:10:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.535 17:10:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:50.535 17:10:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 60266 00:05:50.535 17:10:09 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 60266 ']' 00:05:50.535 17:10:09 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 
60266 00:05:50.535 17:10:09 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:50.535 17:10:09 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.535 17:10:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60266 00:05:50.535 killing process with pid 60266 00:05:50.535 17:10:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.535 17:10:09 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.535 17:10:09 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60266' 00:05:50.535 17:10:09 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 60266 00:05:50.535 17:10:09 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 60266 00:05:53.109 ************************************ 00:05:53.109 END TEST spdkcli_tcp 00:05:53.109 ************************************ 00:05:53.109 00:05:53.109 real 0m4.240s 00:05:53.109 user 0m7.304s 00:05:53.109 sys 0m0.636s 00:05:53.109 17:10:11 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.109 17:10:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.109 17:10:11 -- common/autotest_common.sh@1142 -- # return 0 00:05:53.109 17:10:11 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.109 17:10:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.109 17:10:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.109 17:10:11 -- common/autotest_common.sh@10 -- # set +x 00:05:53.109 ************************************ 00:05:53.109 START TEST dpdk_mem_utility 00:05:53.109 ************************************ 00:05:53.109 17:10:11 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.109 * Looking for test storage... 
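The spdkcli_tcp run above starts spdk_tgt, bridges its UNIX-domain RPC socket to TCP with socat (TCP-LISTEN:9998 ↔ /var/tmp/spdk.sock), and then queries it with `scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods`. SPDK's RPC protocol is JSON-RPC 2.0, so the request rpc.py writes to the socket can be sketched as follows (the helper name `make_rpc_request` is ours, not part of SPDK):

```python
import json

def make_rpc_request(method, params=None, req_id=1):
    # Build a JSON-RPC 2.0 request of the kind scripts/rpc.py writes to
    # /var/tmp/spdk.sock (or, through the socat bridge above, to TCP 9998).
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

print(make_rpc_request("rpc_get_methods"))
```

The target replies with a JSON-RPC response whose "result" field is the method list captured in the log above.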
00:05:53.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:53.109 17:10:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:53.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.109 17:10:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60386 00:05:53.109 17:10:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60386 00:05:53.109 17:10:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.109 17:10:11 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 60386 ']' 00:05:53.109 17:10:11 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.109 17:10:11 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.109 17:10:11 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.109 17:10:11 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.109 17:10:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:53.367 [2024-07-22 17:10:12.128918] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:05:53.367 [2024-07-22 17:10:12.129150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60386 ] 00:05:53.367 [2024-07-22 17:10:12.304196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.625 [2024-07-22 17:10:12.564998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.559 17:10:13 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.559 17:10:13 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:54.559 17:10:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:54.559 17:10:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:54.559 17:10:13 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.559 17:10:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:54.559 { 00:05:54.559 "filename": "/tmp/spdk_mem_dump.txt" 00:05:54.559 } 00:05:54.559 17:10:13 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.559 17:10:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:54.559 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:54.559 1 heaps totaling size 820.000000 MiB 00:05:54.559 size: 820.000000 MiB heap id: 0 00:05:54.559 end heaps---------- 00:05:54.559 8 mempools totaling size 598.116089 MiB 00:05:54.559 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:54.559 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:54.559 size: 84.521057 MiB name: bdev_io_60386 00:05:54.559 size: 51.011292 MiB name: evtpool_60386 00:05:54.559 size: 50.003479 MiB name: msgpool_60386 00:05:54.559 size: 
21.763794 MiB name: PDU_Pool 00:05:54.559 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:54.559 size: 0.026123 MiB name: Session_Pool 00:05:54.559 end mempools------- 00:05:54.559 6 memzones totaling size 4.142822 MiB 00:05:54.559 size: 1.000366 MiB name: RG_ring_0_60386 00:05:54.559 size: 1.000366 MiB name: RG_ring_1_60386 00:05:54.559 size: 1.000366 MiB name: RG_ring_4_60386 00:05:54.559 size: 1.000366 MiB name: RG_ring_5_60386 00:05:54.559 size: 0.125366 MiB name: RG_ring_2_60386 00:05:54.559 size: 0.015991 MiB name: RG_ring_3_60386 00:05:54.559 end memzones------- 00:05:54.559 17:10:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:54.910 heap id: 0 total size: 820.000000 MiB number of busy elements: 298 number of free elements: 18 00:05:54.910 list of free elements. size: 18.452026 MiB 00:05:54.910 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:54.910 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:54.910 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:54.910 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:54.910 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:54.910 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:54.910 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:54.910 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:54.910 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:54.910 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:54.910 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:54.910 element at address: 0x200000200000 with size: 0.829956 MiB 00:05:54.910 element at address: 0x20001b000000 with size: 0.564636 MiB 00:05:54.910 element at address: 0x200019200000 with size: 0.487976 MiB 00:05:54.910 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:54.910 element at 
address: 0x200013800000 with size: 0.467896 MiB 00:05:54.910 element at address: 0x200028400000 with size: 0.390442 MiB 00:05:54.910 element at address: 0x200003a00000 with size: 0.351990 MiB 00:05:54.910 list of standard malloc elements. size: 199.283569 MiB 00:05:54.910 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:54.910 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:54.910 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:54.910 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:54.910 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:54.910 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:54.910 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:54.910 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:54.910 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:54.910 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:54.910 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:54.910 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d5380 with size: 0.000244 MiB 
00:05:54.910 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d7100 with 
size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:54.910 element at address: 
0x200003a5b1c0 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:54.910 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:54.910 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:54.910 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:54.910 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:54.910 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:54.910 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:54.910 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:54.910 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:54.910 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:54.910 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:54.910 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:54.910 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:54.910 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:54.910 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:54.910 
element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200013877c80 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200013877d80 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200013877e80 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200013877f80 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200013878080 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:54.910 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:54.910 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:54.910 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:54.910 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x2000192fdd00 with size: 0.000244 
MiB 00:05:54.911 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:54.911 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:05:54.911 element at address: 0x20001b091fc0 
with size: 0.000244 MiB 00:05:54.911 [element dump condensed: a long run of further mempool elements, each 0.000244 MiB, at consecutive 0x100-spaced addresses spanning 0x20001b0920c0-0x20001b0953c0 and 0x200028463f40-0x20002846fe80] 00:05:54.912 list of memzone associated elements.
size: 602.264404 MiB 00:05:54.912 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:54.912 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:54.912 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:54.912 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:54.912 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:54.912 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_60386_0 00:05:54.912 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:54.912 associated memzone info: size: 48.002930 MiB name: MP_evtpool_60386_0 00:05:54.912 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:54.912 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60386_0 00:05:54.912 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:54.912 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:54.912 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:54.912 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:54.912 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:54.912 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_60386 00:05:54.912 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:54.912 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60386 00:05:54.912 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:54.912 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60386 00:05:54.912 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:54.912 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:54.912 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:54.912 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:54.912 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:54.912 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:54.912 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:54.912 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:54.912 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:54.912 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60386 00:05:54.912 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:54.912 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60386 00:05:54.912 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:54.912 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60386 00:05:54.912 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:54.912 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60386 00:05:54.912 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:54.912 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60386 00:05:54.912 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:05:54.912 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:54.912 element at address: 0x200013878680 with size: 0.500549 MiB 00:05:54.912 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:54.912 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:05:54.912 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:54.912 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:54.912 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60386 00:05:54.912 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:05:54.912 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:54.912 element at address: 0x200028464140 with size: 0.023804 MiB 00:05:54.912 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:54.912 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:54.912 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_60386 00:05:54.912 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:05:54.912 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:54.912 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:05:54.912 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60386 00:05:54.912 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:54.912 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60386 00:05:54.912 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:05:54.912 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:54.912 17:10:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:54.912 17:10:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60386 00:05:54.912 17:10:13 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 60386 ']' 00:05:54.912 17:10:13 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 60386 00:05:54.912 17:10:13 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:54.912 17:10:13 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.912 17:10:13 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60386 00:05:54.912 killing process with pid 60386 00:05:54.912 17:10:13 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.912 17:10:13 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.912 17:10:13 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60386' 00:05:54.912 17:10:13 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 60386 00:05:54.912 17:10:13 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 60386 00:05:57.444 00:05:57.444 real 0m3.920s 00:05:57.444 user 0m3.880s 00:05:57.444 sys 
0m0.603s 00:05:57.444 17:10:15 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.444 ************************************ 00:05:57.444 END TEST dpdk_mem_utility 00:05:57.445 ************************************ 00:05:57.445 17:10:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:57.445 17:10:15 -- common/autotest_common.sh@1142 -- # return 0 00:05:57.445 17:10:15 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:57.445 17:10:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.445 17:10:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.445 17:10:15 -- common/autotest_common.sh@10 -- # set +x 00:05:57.445 ************************************ 00:05:57.445 START TEST event 00:05:57.445 ************************************ 00:05:57.445 17:10:15 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:57.445 * Looking for test storage... 
00:05:57.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:57.445 17:10:15 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:57.445 17:10:15 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:57.445 17:10:15 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:57.445 17:10:15 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:57.445 17:10:15 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.445 17:10:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.445 ************************************ 00:05:57.445 START TEST event_perf 00:05:57.445 ************************************ 00:05:57.445 17:10:15 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:57.445 Running I/O for 1 seconds...[2024-07-22 17:10:16.008119] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:05:57.445 [2024-07-22 17:10:16.008292] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60486 ] 00:05:57.445 [2024-07-22 17:10:16.188014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:57.703 [2024-07-22 17:10:16.479565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.703 [2024-07-22 17:10:16.479718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.703 [2024-07-22 17:10:16.479829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.703 Running I/O for 1 seconds...[2024-07-22 17:10:16.480072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.105 00:05:59.105 lcore 0: 190061 00:05:59.105 lcore 1: 190056 00:05:59.105 lcore 2: 190057 00:05:59.105 lcore 3: 190058 00:05:59.105 done. 
00:05:59.105 00:05:59.105 real 0m1.945s 00:05:59.105 user 0m4.666s 00:05:59.105 sys 0m0.151s 00:05:59.105 17:10:17 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.105 17:10:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:59.105 ************************************ 00:05:59.105 END TEST event_perf 00:05:59.105 ************************************ 00:05:59.105 17:10:17 event -- common/autotest_common.sh@1142 -- # return 0 00:05:59.105 17:10:17 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:59.105 17:10:17 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:59.105 17:10:17 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.105 17:10:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.105 ************************************ 00:05:59.105 START TEST event_reactor 00:05:59.105 ************************************ 00:05:59.105 17:10:17 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:59.105 [2024-07-22 17:10:17.986668] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:05:59.105 [2024-07-22 17:10:17.987622] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60526 ] 00:05:59.363 [2024-07-22 17:10:18.167152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.621 [2024-07-22 17:10:18.408154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.995 test_start 00:06:00.995 oneshot 00:06:00.995 tick 100 00:06:00.995 tick 100 00:06:00.995 tick 250 00:06:00.995 tick 100 00:06:00.995 tick 100 00:06:00.995 tick 100 00:06:00.995 tick 250 00:06:00.995 tick 500 00:06:00.995 tick 100 00:06:00.995 tick 100 00:06:00.995 tick 250 00:06:00.995 tick 100 00:06:00.995 tick 100 00:06:00.995 test_end 00:06:00.995 00:06:00.995 real 0m1.890s 00:06:00.995 user 0m1.655s 00:06:00.995 sys 0m0.122s 00:06:00.995 ************************************ 00:06:00.995 END TEST event_reactor 00:06:00.995 ************************************ 00:06:00.995 17:10:19 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.995 17:10:19 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:00.995 17:10:19 event -- common/autotest_common.sh@1142 -- # return 0 00:06:00.995 17:10:19 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:00.995 17:10:19 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:00.995 17:10:19 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.995 17:10:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.995 ************************************ 00:06:00.995 START TEST event_reactor_perf 00:06:00.995 ************************************ 00:06:00.995 17:10:19 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:00.995 [2024-07-22 17:10:19.931215] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:00.995 [2024-07-22 17:10:19.931359] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60562 ] 00:06:01.253 [2024-07-22 17:10:20.093295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.511 [2024-07-22 17:10:20.336214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.883 test_start 00:06:02.883 test_end 00:06:02.883 Performance: 283825 events per second 00:06:02.883 ************************************ 00:06:02.883 END TEST event_reactor_perf 00:06:02.883 ************************************ 00:06:02.883 00:06:02.883 real 0m1.863s 00:06:02.883 user 0m1.653s 00:06:02.883 sys 0m0.101s 00:06:02.883 17:10:21 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.883 17:10:21 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.883 17:10:21 event -- common/autotest_common.sh@1142 -- # return 0 00:06:02.883 17:10:21 event -- event/event.sh@49 -- # uname -s 00:06:02.883 17:10:21 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:02.883 17:10:21 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:02.883 17:10:21 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.883 17:10:21 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.883 17:10:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.883 ************************************ 00:06:02.883 START TEST event_scheduler 00:06:02.883 ************************************ 00:06:02.883 17:10:21 
event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:03.140 * Looking for test storage... 00:06:03.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:03.140 17:10:21 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:03.140 17:10:21 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60630 00:06:03.140 17:10:21 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:03.140 17:10:21 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.140 17:10:21 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60630 00:06:03.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.140 17:10:21 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60630 ']' 00:06:03.140 17:10:21 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.140 17:10:21 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.140 17:10:21 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.140 17:10:21 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.140 17:10:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:03.140 [2024-07-22 17:10:22.024701] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:03.140 [2024-07-22 17:10:22.025153] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60630 ] 00:06:03.398 [2024-07-22 17:10:22.194528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.655 [2024-07-22 17:10:22.504666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.655 [2024-07-22 17:10:22.504776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.655 [2024-07-22 17:10:22.504917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.655 [2024-07-22 17:10:22.505060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.221 17:10:22 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.221 17:10:22 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:04.221 17:10:22 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:04.221 17:10:22 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.221 17:10:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.221 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:04.221 POWER: Cannot set governor of lcore 0 to userspace 00:06:04.221 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:04.221 POWER: Cannot set governor of lcore 0 to performance 00:06:04.221 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:04.221 POWER: Cannot set governor of lcore 0 to userspace 00:06:04.221 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:04.221 POWER: Cannot set governor of lcore 0 to userspace 00:06:04.221 GUEST_CHANNEL: Opening 
channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:04.221 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:04.221 POWER: Unable to set Power Management Environment for lcore 0 00:06:04.221 [2024-07-22 17:10:22.952452] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:04.221 [2024-07-22 17:10:22.952479] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:04.221 [2024-07-22 17:10:22.952498] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:04.221 [2024-07-22 17:10:22.952519] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:04.221 [2024-07-22 17:10:22.952536] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:04.221 [2024-07-22 17:10:22.952548] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:04.221 17:10:22 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.221 17:10:22 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:04.221 17:10:22 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.221 17:10:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.482 [2024-07-22 17:10:23.278859] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:04.482 17:10:23 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.482 17:10:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:04.482 17:10:23 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.482 17:10:23 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.482 17:10:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.482 ************************************ 00:06:04.482 START TEST scheduler_create_thread 00:06:04.482 ************************************ 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.482 2 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.482 3 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.482 4 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.482 5 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.482 6 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:04.482 7 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.482 8 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.482 9 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.482 10 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.482 17:10:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.381 17:10:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.381 17:10:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:06.381 17:10:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:06.381 17:10:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.381 17:10:24 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.317 ************************************ 00:06:07.317 END TEST scheduler_create_thread 00:06:07.317 ************************************ 00:06:07.317 17:10:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.317 00:06:07.317 real 0m2.622s 00:06:07.317 user 0m0.017s 00:06:07.317 sys 0m0.004s 00:06:07.317 17:10:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.317 17:10:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.317 17:10:25 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:07.317 17:10:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:07.317 17:10:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60630 00:06:07.317 17:10:25 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60630 ']' 00:06:07.317 17:10:25 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60630 00:06:07.317 17:10:25 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:07.317 17:10:25 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.317 17:10:25 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60630 00:06:07.317 killing process with pid 60630 00:06:07.317 17:10:25 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:07.317 17:10:25 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:07.318 17:10:25 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60630' 00:06:07.318 17:10:25 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60630 00:06:07.318 17:10:25 event.event_scheduler -- 
common/autotest_common.sh@972 -- # wait 60630 00:06:07.575 [2024-07-22 17:10:26.392254] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:08.949 00:06:08.949 real 0m5.817s 00:06:08.949 user 0m9.545s 00:06:08.949 sys 0m0.465s 00:06:08.949 ************************************ 00:06:08.949 END TEST event_scheduler 00:06:08.949 ************************************ 00:06:08.949 17:10:27 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.949 17:10:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.949 17:10:27 event -- common/autotest_common.sh@1142 -- # return 0 00:06:08.949 17:10:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:08.949 17:10:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:08.949 17:10:27 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.949 17:10:27 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.949 17:10:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.949 ************************************ 00:06:08.949 START TEST app_repeat 00:06:08.949 ************************************ 00:06:08.949 17:10:27 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:08.949 17:10:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.949 17:10:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.949 17:10:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:08.949 17:10:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.949 17:10:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:08.949 17:10:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:08.949 17:10:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:08.949 Process app_repeat pid: 60747 00:06:08.949 spdk_app_start Round 0 
00:06:08.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:08.949 17:10:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60747 00:06:08.949 17:10:27 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:08.949 17:10:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.949 17:10:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60747' 00:06:08.949 17:10:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:08.949 17:10:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:08.949 17:10:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60747 /var/tmp/spdk-nbd.sock 00:06:08.949 17:10:27 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60747 ']' 00:06:08.949 17:10:27 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.950 17:10:27 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.950 17:10:27 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:08.950 17:10:27 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.950 17:10:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.950 [2024-07-22 17:10:27.759747] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:08.950 [2024-07-22 17:10:27.760042] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60747 ] 00:06:09.208 [2024-07-22 17:10:27.934072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.466 [2024-07-22 17:10:28.184489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.466 [2024-07-22 17:10:28.184500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.724 17:10:28 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.724 17:10:28 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:09.724 17:10:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.289 Malloc0 00:06:10.289 17:10:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.547 Malloc1 00:06:10.547 17:10:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.547 17:10:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.547 17:10:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.547 17:10:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.547 17:10:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.547 17:10:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.547 17:10:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.547 17:10:29 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.547 17:10:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.547 17:10:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.547 17:10:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.547 17:10:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.547 17:10:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:10.547 17:10:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.547 17:10:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.548 17:10:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.548 /dev/nbd0 00:06:10.806 17:10:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.806 17:10:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.806 17:10:29 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:10.806 17:10:29 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:10.806 17:10:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:10.806 17:10:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:10.806 17:10:29 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:10.806 17:10:29 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:10.806 17:10:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:10.806 17:10:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:10.806 17:10:29 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.806 1+0 records in 00:06:10.806 1+0 
records out 00:06:10.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345523 s, 11.9 MB/s 00:06:10.806 17:10:29 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.806 17:10:29 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:10.806 17:10:29 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.806 17:10:29 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:10.806 17:10:29 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:10.806 17:10:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.806 17:10:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.806 17:10:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:11.064 /dev/nbd1 00:06:11.064 17:10:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:11.064 17:10:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:11.064 17:10:29 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:11.064 17:10:29 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:11.064 17:10:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:11.064 17:10:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:11.064 17:10:29 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:11.064 17:10:29 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:11.064 17:10:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:11.064 17:10:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:11.064 17:10:29 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.064 1+0 records in 00:06:11.064 1+0 records out 00:06:11.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345454 s, 11.9 MB/s 00:06:11.064 17:10:29 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.064 17:10:29 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:11.064 17:10:29 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.064 17:10:29 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:11.064 17:10:29 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:11.064 17:10:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.064 17:10:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.064 17:10:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.064 17:10:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.064 17:10:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:11.322 { 00:06:11.322 "nbd_device": "/dev/nbd0", 00:06:11.322 "bdev_name": "Malloc0" 00:06:11.322 }, 00:06:11.322 { 00:06:11.322 "nbd_device": "/dev/nbd1", 00:06:11.322 "bdev_name": "Malloc1" 00:06:11.322 } 00:06:11.322 ]' 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.322 { 00:06:11.322 "nbd_device": "/dev/nbd0", 00:06:11.322 "bdev_name": "Malloc0" 00:06:11.322 }, 00:06:11.322 { 00:06:11.322 "nbd_device": "/dev/nbd1", 00:06:11.322 "bdev_name": "Malloc1" 00:06:11.322 } 00:06:11.322 ]' 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.322 /dev/nbd1' 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.322 /dev/nbd1' 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.322 256+0 records in 00:06:11.322 256+0 records out 00:06:11.322 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00599792 s, 175 MB/s 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.322 256+0 records in 00:06:11.322 256+0 records out 00:06:11.322 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261709 s, 40.1 MB/s 00:06:11.322 17:10:30 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.322 256+0 records in 00:06:11.322 256+0 records out 00:06:11.322 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0354586 s, 29.6 MB/s 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.322 17:10:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.580 17:10:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.580 17:10:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.580 17:10:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.580 17:10:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.580 17:10:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.580 17:10:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:11.580 17:10:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.580 17:10:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.838 17:10:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.838 17:10:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.838 17:10:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.838 17:10:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.838 17:10:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.838 17:10:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.838 17:10:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.838 17:10:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.838 17:10:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.838 17:10:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.096 17:10:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.096 17:10:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.096 17:10:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.096 17:10:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.096 17:10:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.096 17:10:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.096 17:10:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:12.096 17:10:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.096 17:10:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.096 17:10:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.096 17:10:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.354 17:10:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:12.354 17:10:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:12.354 17:10:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.354 17:10:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:12.354 17:10:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:12.354 17:10:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.354 17:10:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:12.354 17:10:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:12.354 17:10:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:12.354 17:10:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:12.354 17:10:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:12.354 17:10:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:12.354 17:10:31 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.921 17:10:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:14.294 [2024-07-22 17:10:32.938009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.294 [2024-07-22 17:10:33.177154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.294 [2024-07-22 17:10:33.177168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.557 
[2024-07-22 17:10:33.371252] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:14.557 [2024-07-22 17:10:33.371369] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.932 spdk_app_start Round 1 00:06:15.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.932 17:10:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:15.932 17:10:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:15.932 17:10:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60747 /var/tmp/spdk-nbd.sock 00:06:15.932 17:10:34 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60747 ']' 00:06:15.932 17:10:34 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.932 17:10:34 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.932 17:10:34 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:15.932 17:10:34 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.932 17:10:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:16.190 17:10:34 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.190 17:10:34 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:16.190 17:10:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.448 Malloc0 00:06:16.448 17:10:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.705 Malloc1 00:06:16.706 17:10:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.706 17:10:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.706 17:10:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.706 17:10:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:16.706 17:10:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.706 17:10:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:16.706 17:10:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.706 17:10:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.706 17:10:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.706 17:10:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:16.706 17:10:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.706 17:10:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:16.706 17:10:35 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:16.706 17:10:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:16.706 17:10:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.706 17:10:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:17.270 /dev/nbd0 00:06:17.270 17:10:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.270 17:10:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.270 17:10:35 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:17.270 17:10:35 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:17.270 17:10:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:17.270 17:10:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:17.270 17:10:35 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:17.270 17:10:35 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:17.270 17:10:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:17.270 17:10:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:17.270 17:10:35 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.270 1+0 records in 00:06:17.270 1+0 records out 00:06:17.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293871 s, 13.9 MB/s 00:06:17.270 17:10:35 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.270 17:10:35 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:17.270 17:10:35 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.270 
17:10:35 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:17.270 17:10:35 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:17.270 17:10:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.270 17:10:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.270 17:10:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:17.528 /dev/nbd1 00:06:17.528 17:10:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:17.528 17:10:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:17.528 17:10:36 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:17.528 17:10:36 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:17.528 17:10:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:17.528 17:10:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:17.528 17:10:36 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:17.528 17:10:36 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:17.528 17:10:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:17.528 17:10:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:17.528 17:10:36 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.528 1+0 records in 00:06:17.528 1+0 records out 00:06:17.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394681 s, 10.4 MB/s 00:06:17.528 17:10:36 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.528 17:10:36 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:17.528 17:10:36 event.app_repeat 
-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.528 17:10:36 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:17.528 17:10:36 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:17.528 17:10:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.528 17:10:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.528 17:10:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.528 17:10:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.528 17:10:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.786 { 00:06:17.786 "nbd_device": "/dev/nbd0", 00:06:17.786 "bdev_name": "Malloc0" 00:06:17.786 }, 00:06:17.786 { 00:06:17.786 "nbd_device": "/dev/nbd1", 00:06:17.786 "bdev_name": "Malloc1" 00:06:17.786 } 00:06:17.786 ]' 00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.786 { 00:06:17.786 "nbd_device": "/dev/nbd0", 00:06:17.786 "bdev_name": "Malloc0" 00:06:17.786 }, 00:06:17.786 { 00:06:17.786 "nbd_device": "/dev/nbd1", 00:06:17.786 "bdev_name": "Malloc1" 00:06:17.786 } 00:06:17.786 ]' 00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.786 /dev/nbd1' 00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.786 /dev/nbd1' 00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.786 
17:10:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.786 256+0 records in 00:06:17.786 256+0 records out 00:06:17.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00912866 s, 115 MB/s 00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.786 256+0 records in 00:06:17.786 256+0 records out 00:06:17.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259938 s, 40.3 MB/s 00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.786 256+0 records in 00:06:17.786 256+0 records out 00:06:17.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0354936 s, 29.5 MB/s 00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:17.786 17:10:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.787 17:10:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.787 17:10:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.787 17:10:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.787 17:10:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.787 17:10:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.787 17:10:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.787 17:10:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.787 17:10:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.787 17:10:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:18.080 17:10:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.080 17:10:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:18.080 17:10:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.080 17:10:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.080 17:10:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:18.080 17:10:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:18.080 17:10:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.080 17:10:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.361 17:10:37 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.361 17:10:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.361 17:10:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.361 17:10:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.361 17:10:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.361 17:10:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.361 17:10:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.361 17:10:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.361 17:10:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.361 17:10:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.361 17:10:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.361 17:10:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.361 17:10:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.361 17:10:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.361 17:10:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.361 17:10:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.361 17:10:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.361 17:10:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.361 17:10:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.361 17:10:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.361 17:10:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.620 17:10:37 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.620 17:10:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.620 17:10:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.878 17:10:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.878 17:10:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.878 17:10:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.878 17:10:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:18.878 17:10:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.878 17:10:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.878 17:10:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:18.878 17:10:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:18.878 17:10:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:18.878 17:10:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:19.137 17:10:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:20.513 [2024-07-22 17:10:39.182261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.513 [2024-07-22 17:10:39.418671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.513 [2024-07-22 17:10:39.418677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.772 [2024-07-22 17:10:39.610746] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:20.772 [2024-07-22 17:10:39.610874] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:22.171 spdk_app_start Round 2 00:06:22.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:22.171 17:10:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:22.171 17:10:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:22.171 17:10:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60747 /var/tmp/spdk-nbd.sock 00:06:22.171 17:10:40 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60747 ']' 00:06:22.171 17:10:40 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.171 17:10:40 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.171 17:10:40 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.171 17:10:40 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.171 17:10:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.429 17:10:41 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.429 17:10:41 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:22.429 17:10:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.688 Malloc0 00:06:22.688 17:10:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.255 Malloc1 00:06:23.255 17:10:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.255 17:10:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.255 17:10:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.255 17:10:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:23.255 17:10:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.255 17:10:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:23.255 17:10:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.255 17:10:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.255 17:10:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.256 17:10:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:23.256 17:10:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.256 17:10:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:23.256 17:10:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:23.256 17:10:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:23.256 17:10:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.256 17:10:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.514 /dev/nbd0 00:06:23.514 17:10:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.514 17:10:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:23.514 17:10:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:23.514 17:10:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:23.514 17:10:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:23.514 17:10:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:23.514 17:10:42 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:23.514 17:10:42 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:23.514 17:10:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 
00:06:23.514 17:10:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:23.514 17:10:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.514 1+0 records in 00:06:23.514 1+0 records out 00:06:23.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262791 s, 15.6 MB/s 00:06:23.514 17:10:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.514 17:10:42 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:23.514 17:10:42 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.514 17:10:42 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:23.514 17:10:42 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:23.514 17:10:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.514 17:10:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.514 17:10:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:23.773 /dev/nbd1 00:06:23.773 17:10:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:23.773 17:10:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:23.773 17:10:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:23.773 17:10:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:23.773 17:10:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:23.773 17:10:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:23.773 17:10:42 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:23.773 17:10:42 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:06:23.773 17:10:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:23.773 17:10:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:23.773 17:10:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.773 1+0 records in 00:06:23.773 1+0 records out 00:06:23.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326597 s, 12.5 MB/s 00:06:23.773 17:10:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.773 17:10:42 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:23.773 17:10:42 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.773 17:10:42 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:23.773 17:10:42 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:23.773 17:10:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.773 17:10:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.773 17:10:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.773 17:10:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.773 17:10:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.032 17:10:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:24.032 { 00:06:24.032 "nbd_device": "/dev/nbd0", 00:06:24.032 "bdev_name": "Malloc0" 00:06:24.032 }, 00:06:24.032 { 00:06:24.032 "nbd_device": "/dev/nbd1", 00:06:24.032 "bdev_name": "Malloc1" 00:06:24.032 } 00:06:24.032 ]' 00:06:24.032 17:10:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.032 { 
00:06:24.032 "nbd_device": "/dev/nbd0", 00:06:24.032 "bdev_name": "Malloc0" 00:06:24.032 }, 00:06:24.032 { 00:06:24.032 "nbd_device": "/dev/nbd1", 00:06:24.032 "bdev_name": "Malloc1" 00:06:24.032 } 00:06:24.032 ]' 00:06:24.032 17:10:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.032 17:10:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:24.032 /dev/nbd1' 00:06:24.032 17:10:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.032 17:10:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:24.032 /dev/nbd1' 00:06:24.032 17:10:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:24.032 17:10:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:24.032 17:10:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:24.032 17:10:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:24.032 17:10:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:24.032 17:10:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.032 17:10:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.032 17:10:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:24.032 17:10:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.032 17:10:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:24.032 17:10:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:24.032 256+0 records in 00:06:24.032 256+0 records out 00:06:24.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00813051 s, 129 MB/s 00:06:24.032 17:10:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.032 17:10:42 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:24.291 256+0 records in 00:06:24.291 256+0 records out 00:06:24.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271148 s, 38.7 MB/s 00:06:24.291 17:10:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.291 17:10:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.291 256+0 records in 00:06:24.291 256+0 records out 00:06:24.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0381832 s, 27.5 MB/s 00:06:24.291 17:10:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:24.291 17:10:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.291 17:10:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.291 17:10:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.291 17:10:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.291 17:10:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.291 17:10:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.291 17:10:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.291 17:10:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.291 17:10:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.291 17:10:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:24.291 17:10:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:06:24.291 17:10:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.291 17:10:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.291 17:10:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.291 17:10:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.291 17:10:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:24.291 17:10:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.291 17:10:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.548 17:10:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.548 17:10:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.548 17:10:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.548 17:10:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.548 17:10:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.548 17:10:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.548 17:10:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.548 17:10:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.548 17:10:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.548 17:10:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.806 17:10:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.806 17:10:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.806 17:10:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.806 17:10:43 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.806 17:10:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.806 17:10:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.806 17:10:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.806 17:10:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.806 17:10:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.806 17:10:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.806 17:10:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.065 17:10:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.065 17:10:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.065 17:10:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.065 17:10:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.065 17:10:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.065 17:10:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.065 17:10:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:25.065 17:10:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.065 17:10:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.065 17:10:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:25.065 17:10:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:25.065 17:10:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:25.065 17:10:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:25.631 17:10:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:27.007 
[2024-07-22 17:10:45.696870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.007 [2024-07-22 17:10:45.936292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.007 [2024-07-22 17:10:45.936317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.265 [2024-07-22 17:10:46.131151] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:27.265 [2024-07-22 17:10:46.131296] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.638 17:10:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60747 /var/tmp/spdk-nbd.sock 00:06:28.638 17:10:47 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60747 ']' 00:06:28.638 17:10:47 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.638 17:10:47 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.638 17:10:47 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:28.638 17:10:47 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.638 17:10:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.896 17:10:47 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.896 17:10:47 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:28.896 17:10:47 event.app_repeat -- event/event.sh@39 -- # killprocess 60747 00:06:28.896 17:10:47 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60747 ']' 00:06:28.896 17:10:47 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60747 00:06:28.896 17:10:47 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:28.896 17:10:47 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.896 17:10:47 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60747 00:06:28.896 killing process with pid 60747 00:06:28.896 17:10:47 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:28.896 17:10:47 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:28.896 17:10:47 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60747' 00:06:28.896 17:10:47 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60747 00:06:28.896 17:10:47 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60747 00:06:30.270 spdk_app_start is called in Round 0. 00:06:30.270 Shutdown signal received, stop current app iteration 00:06:30.270 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:06:30.270 spdk_app_start is called in Round 1. 00:06:30.270 Shutdown signal received, stop current app iteration 00:06:30.270 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:06:30.270 spdk_app_start is called in Round 2. 
00:06:30.270 Shutdown signal received, stop current app iteration 00:06:30.270 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:06:30.270 spdk_app_start is called in Round 3. 00:06:30.270 Shutdown signal received, stop current app iteration 00:06:30.270 17:10:48 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:30.270 17:10:48 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:30.270 00:06:30.270 real 0m21.194s 00:06:30.270 user 0m45.204s 00:06:30.270 sys 0m3.159s 00:06:30.270 17:10:48 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.270 ************************************ 00:06:30.270 END TEST app_repeat 00:06:30.270 17:10:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:30.270 ************************************ 00:06:30.270 17:10:48 event -- common/autotest_common.sh@1142 -- # return 0 00:06:30.270 17:10:48 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:30.270 17:10:48 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:30.270 17:10:48 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.270 17:10:48 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.270 17:10:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.270 ************************************ 00:06:30.270 START TEST cpu_locks 00:06:30.270 ************************************ 00:06:30.270 17:10:48 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:30.270 * Looking for test storage... 
00:06:30.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:30.270 17:10:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:30.270 17:10:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:30.270 17:10:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:30.270 17:10:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:30.270 17:10:49 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.270 17:10:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.270 17:10:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.270 ************************************ 00:06:30.270 START TEST default_locks 00:06:30.270 ************************************ 00:06:30.270 17:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:30.270 17:10:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=61205 00:06:30.270 17:10:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 61205 00:06:30.270 17:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 61205 ']' 00:06:30.270 17:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.270 17:10:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.270 17:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.270 17:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:30.270 17:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.270 17:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.270 [2024-07-22 17:10:49.193122] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:30.270 [2024-07-22 17:10:49.193395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61205 ] 00:06:30.560 [2024-07-22 17:10:49.366805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.822 [2024-07-22 17:10:49.625253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.758 17:10:50 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.758 17:10:50 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:31.758 17:10:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 61205 00:06:31.758 17:10:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 61205 00:06:31.758 17:10:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.015 17:10:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 61205 00:06:32.015 17:10:50 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 61205 ']' 00:06:32.015 17:10:50 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 61205 00:06:32.015 17:10:50 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:32.015 17:10:50 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:32.015 17:10:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61205 00:06:32.015 
17:10:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:32.015 17:10:50 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:32.015 killing process with pid 61205 00:06:32.015 17:10:50 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61205' 00:06:32.015 17:10:50 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 61205 00:06:32.015 17:10:50 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 61205 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 61205 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61205 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 61205 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 61205 ']' 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:34.554 17:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.554 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61205) - No such process 00:06:34.554 ERROR: process (pid: 61205) is no longer running 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:34.554 00:06:34.554 real 0m4.137s 00:06:34.554 user 0m4.136s 00:06:34.554 sys 0m0.730s 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.554 17:10:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.554 ************************************ 00:06:34.554 END TEST default_locks 00:06:34.554 ************************************ 00:06:34.554 
17:10:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:34.554 17:10:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:34.554 17:10:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.554 17:10:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.554 17:10:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.554 ************************************ 00:06:34.554 START TEST default_locks_via_rpc 00:06:34.554 ************************************ 00:06:34.554 17:10:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:34.554 17:10:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61280 00:06:34.554 17:10:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 61280 00:06:34.554 17:10:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.554 17:10:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61280 ']' 00:06:34.554 17:10:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.554 17:10:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.554 17:10:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:34.554 17:10:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.554 17:10:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.554 [2024-07-22 17:10:53.374655] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:34.554 [2024-07-22 17:10:53.374924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61280 ] 00:06:34.818 [2024-07-22 17:10:53.542974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.076 [2024-07-22 17:10:53.795896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.011 17:10:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.011 17:10:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:36.011 17:10:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:36.011 17:10:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.011 17:10:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.011 17:10:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.011 17:10:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:36.011 17:10:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:36.011 17:10:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:36.011 17:10:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:36.011 17:10:54 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:36.011 17:10:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.011 17:10:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.011 17:10:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.011 17:10:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 61280 00:06:36.011 17:10:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.011 17:10:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61280 00:06:36.269 17:10:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61280 00:06:36.269 17:10:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 61280 ']' 00:06:36.269 17:10:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 61280 00:06:36.269 17:10:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:36.269 17:10:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.270 17:10:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61280 00:06:36.270 17:10:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.270 17:10:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.270 killing process with pid 61280 00:06:36.270 17:10:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61280' 00:06:36.270 17:10:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 61280 00:06:36.270 
17:10:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 61280 00:06:38.801 00:06:38.801 real 0m4.139s 00:06:38.801 user 0m4.111s 00:06:38.801 sys 0m0.735s 00:06:38.801 17:10:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.801 ************************************ 00:06:38.801 END TEST default_locks_via_rpc 00:06:38.801 ************************************ 00:06:38.801 17:10:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.801 17:10:57 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:38.801 17:10:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:38.801 17:10:57 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.801 17:10:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.801 17:10:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.801 ************************************ 00:06:38.801 START TEST non_locking_app_on_locked_coremask 00:06:38.801 ************************************ 00:06:38.801 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:38.801 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61356 00:06:38.801 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61356 /var/tmp/spdk.sock 00:06:38.801 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61356 ']' 00:06:38.801 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:38.801 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.801 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.801 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.801 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.801 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.801 [2024-07-22 17:10:57.568471] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:38.801 [2024-07-22 17:10:57.568691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61356 ] 00:06:38.801 [2024-07-22 17:10:57.734170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.059 [2024-07-22 17:10:57.994200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.993 17:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.993 17:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:39.993 17:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61378 00:06:39.993 17:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61378 /var/tmp/spdk2.sock 00:06:39.993 17:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- 
# '[' -z 61378 ']' 00:06:39.993 17:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.993 17:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:39.993 17:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.993 17:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.993 17:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.993 17:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.252 [2024-07-22 17:10:58.979777] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:40.252 [2024-07-22 17:10:58.980466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61378 ] 00:06:40.252 [2024-07-22 17:10:59.167261] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:40.252 [2024-07-22 17:10:59.167344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.817 [2024-07-22 17:10:59.681540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.345 17:11:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.345 17:11:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:43.345 17:11:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61356 00:06:43.345 17:11:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61356 00:06:43.345 17:11:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.910 17:11:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61356 00:06:43.910 17:11:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61356 ']' 00:06:43.910 17:11:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61356 00:06:43.910 17:11:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:43.910 17:11:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.910 17:11:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61356 00:06:43.910 17:11:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.910 17:11:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.910 killing process with pid 61356 00:06:43.910 17:11:02 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 61356' 00:06:43.910 17:11:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61356 00:06:43.910 17:11:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61356 00:06:49.221 17:11:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61378 00:06:49.221 17:11:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61378 ']' 00:06:49.221 17:11:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61378 00:06:49.221 17:11:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:49.221 17:11:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.221 17:11:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61378 00:06:49.221 17:11:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.221 17:11:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.221 killing process with pid 61378 00:06:49.221 17:11:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61378' 00:06:49.221 17:11:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61378 00:06:49.221 17:11:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61378 00:06:50.598 00:06:50.598 real 0m12.136s 00:06:50.598 user 0m12.617s 00:06:50.598 sys 0m1.554s 00:06:50.598 17:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:06:50.598 17:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.598 ************************************ 00:06:50.598 END TEST non_locking_app_on_locked_coremask 00:06:50.598 ************************************ 00:06:50.858 17:11:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:50.858 17:11:09 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:50.858 17:11:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.858 17:11:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.858 17:11:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.858 ************************************ 00:06:50.858 START TEST locking_app_on_unlocked_coremask 00:06:50.858 ************************************ 00:06:50.858 17:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:50.858 17:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61532 00:06:50.858 17:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:50.858 17:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61532 /var/tmp/spdk.sock 00:06:50.858 17:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61532 ']' 00:06:50.858 17:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.858 17:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.858 17:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.858 17:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.858 17:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.119 [2024-07-22 17:11:09.958862] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:51.119 [2024-07-22 17:11:09.959319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61532 ] 00:06:51.378 [2024-07-22 17:11:10.166316] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:51.378 [2024-07-22 17:11:10.166650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.636 [2024-07-22 17:11:10.524314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.595 17:11:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.595 17:11:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:52.595 17:11:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:52.595 17:11:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61554 00:06:52.595 17:11:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61554 /var/tmp/spdk2.sock 00:06:52.595 17:11:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61554 ']' 00:06:52.595 17:11:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.595 17:11:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.595 17:11:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.595 17:11:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.595 17:11:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.595 [2024-07-22 17:11:11.528585] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:52.595 [2024-07-22 17:11:11.528755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61554 ] 00:06:52.853 [2024-07-22 17:11:11.701816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.420 [2024-07-22 17:11:12.217549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.318 17:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.318 17:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:55.318 17:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61554 00:06:55.318 17:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61554 00:06:55.318 17:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.254 17:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61532 00:06:56.254 17:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61532 ']' 00:06:56.254 17:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 61532 00:06:56.254 17:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:56.254 17:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:56.254 17:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61532 00:06:56.254 17:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:56.254 17:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:56.254 17:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61532' 00:06:56.254 killing process with pid 61532 00:06:56.254 17:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 61532 00:06:56.254 17:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 61532 00:07:01.523 17:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61554 00:07:01.523 17:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61554 ']' 00:07:01.523 17:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 61554 00:07:01.523 17:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:01.523 17:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.523 17:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61554 00:07:01.523 17:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.523 killing process with pid 61554 00:07:01.523 17:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.523 17:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61554' 00:07:01.523 17:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 61554 00:07:01.523 17:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@972 -- # wait 61554 00:07:02.898 00:07:02.898 real 0m12.198s 00:07:02.898 user 0m12.629s 00:07:02.898 sys 0m1.664s 00:07:02.898 17:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.898 17:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.898 ************************************ 00:07:02.898 END TEST locking_app_on_unlocked_coremask 00:07:02.898 ************************************ 00:07:03.156 17:11:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:03.156 17:11:21 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:03.157 17:11:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.157 17:11:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.157 17:11:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.157 ************************************ 00:07:03.157 START TEST locking_app_on_locked_coremask 00:07:03.157 ************************************ 00:07:03.157 17:11:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:03.157 17:11:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61703 00:07:03.157 17:11:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61703 /var/tmp/spdk.sock 00:07:03.157 17:11:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61703 ']' 00:07:03.157 17:11:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.157 17:11:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:03.157 17:11:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.157 17:11:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.157 17:11:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.157 17:11:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.157 [2024-07-22 17:11:22.055922] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:03.157 [2024-07-22 17:11:22.056182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61703 ] 00:07:03.415 [2024-07-22 17:11:22.232895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.674 [2024-07-22 17:11:22.515613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.626 17:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.626 17:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:04.626 17:11:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61730 00:07:04.626 17:11:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61730 /var/tmp/spdk2.sock 00:07:04.626 17:11:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r 
/var/tmp/spdk2.sock 00:07:04.626 17:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:04.626 17:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61730 /var/tmp/spdk2.sock 00:07:04.626 17:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:04.626 17:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.626 17:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:04.626 17:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.626 17:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 61730 /var/tmp/spdk2.sock 00:07:04.626 17:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61730 ']' 00:07:04.626 17:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.626 17:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:04.626 17:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:04.626 17:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.626 17:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.626 [2024-07-22 17:11:23.541444] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:04.626 [2024-07-22 17:11:23.541650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61730 ] 00:07:04.885 [2024-07-22 17:11:23.723198] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61703 has claimed it. 00:07:04.885 [2024-07-22 17:11:23.723311] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:05.451 ERROR: process (pid: 61730) is no longer running 00:07:05.451 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61730) - No such process 00:07:05.451 17:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.451 17:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:05.451 17:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:05.451 17:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.451 17:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:05.451 17:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.451 17:11:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61703 00:07:05.451 17:11:24 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61703 00:07:05.451 17:11:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.710 17:11:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61703 00:07:05.710 17:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61703 ']' 00:07:05.710 17:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61703 00:07:05.710 17:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:05.710 17:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.710 17:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61703 00:07:05.710 17:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:05.710 17:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:05.710 killing process with pid 61703 00:07:05.710 17:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61703' 00:07:05.710 17:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61703 00:07:05.710 17:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61703 00:07:08.240 00:07:08.240 real 0m4.954s 00:07:08.240 user 0m5.266s 00:07:08.240 sys 0m0.897s 00:07:08.240 17:11:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.240 17:11:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.240 
************************************ 00:07:08.240 END TEST locking_app_on_locked_coremask 00:07:08.240 ************************************ 00:07:08.240 17:11:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:08.240 17:11:26 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:08.240 17:11:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:08.240 17:11:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.240 17:11:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.240 ************************************ 00:07:08.240 START TEST locking_overlapped_coremask 00:07:08.240 ************************************ 00:07:08.240 17:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:08.240 17:11:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61794 00:07:08.240 17:11:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61794 /var/tmp/spdk.sock 00:07:08.240 17:11:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:08.240 17:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 61794 ']' 00:07:08.240 17:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.240 17:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.240 17:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:08.240 17:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.240 17:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.240 [2024-07-22 17:11:27.056624] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:08.240 [2024-07-22 17:11:27.056843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61794 ] 00:07:08.498 [2024-07-22 17:11:27.239266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.756 [2024-07-22 17:11:27.530578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.756 [2024-07-22 17:11:27.530695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.756 [2024-07-22 17:11:27.530703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.688 17:11:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.688 17:11:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:09.688 17:11:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61818 00:07:09.688 17:11:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:09.688 17:11:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61818 /var/tmp/spdk2.sock 00:07:09.688 17:11:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:09.688 17:11:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61818 
/var/tmp/spdk2.sock 00:07:09.688 17:11:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:09.688 17:11:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.688 17:11:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:09.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.688 17:11:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.688 17:11:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 61818 /var/tmp/spdk2.sock 00:07:09.688 17:11:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 61818 ']' 00:07:09.688 17:11:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.688 17:11:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.688 17:11:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.688 17:11:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.688 17:11:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.688 [2024-07-22 17:11:28.503370] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:09.688 [2024-07-22 17:11:28.503604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61818 ] 00:07:09.946 [2024-07-22 17:11:28.681796] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61794 has claimed it. 00:07:09.946 [2024-07-22 17:11:28.681896] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:10.510 ERROR: process (pid: 61818) is no longer running 00:07:10.510 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61818) - No such process 00:07:10.510 17:11:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.510 17:11:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:10.510 17:11:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:10.510 17:11:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.510 17:11:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:10.510 17:11:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.510 17:11:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:10.510 17:11:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:10.510 17:11:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:10.510 17:11:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:10.510 17:11:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61794 00:07:10.510 17:11:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 61794 ']' 00:07:10.510 17:11:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 61794 00:07:10.510 17:11:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:10.510 17:11:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.510 17:11:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61794 00:07:10.510 killing process with pid 61794 00:07:10.510 17:11:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:10.510 17:11:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:10.510 17:11:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61794' 00:07:10.510 17:11:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 61794 00:07:10.510 17:11:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 61794 00:07:13.041 00:07:13.041 real 0m4.720s 00:07:13.041 user 0m12.198s 00:07:13.041 sys 0m0.704s 00:07:13.041 17:11:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.041 17:11:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.041 ************************************ 
00:07:13.041 END TEST locking_overlapped_coremask 00:07:13.041 ************************************ 00:07:13.041 17:11:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:13.041 17:11:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:13.041 17:11:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.041 17:11:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.041 17:11:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.041 ************************************ 00:07:13.041 START TEST locking_overlapped_coremask_via_rpc 00:07:13.041 ************************************ 00:07:13.041 17:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:13.041 17:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61887 00:07:13.041 17:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61887 /var/tmp/spdk.sock 00:07:13.041 17:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:13.041 17:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61887 ']' 00:07:13.041 17:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.041 17:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
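The `check_remaining_locks` fragment traced above globs `locks=(/var/tmp/spdk_cpu_lock_*)` and compares it against the brace expansion `spdk_cpu_lock_{000..002}`, i.e. it asserts that exactly one lock file per claimed core (mask 0x7 → cores 0–2) is left on disk. A standalone sketch of that same comparison, using a scratch directory in place of `/var/tmp` so it is safe to run anywhere:

```shell
#!/usr/bin/env bash
# Sketch of the check_remaining_locks pattern from the trace above.
# A scratch directory stands in for /var/tmp.
lockdir=$(mktemp -d)
touch "$lockdir"/spdk_cpu_lock_{000..002}   # simulate locks for cores 0-2

locks=("$lockdir"/spdk_cpu_lock_*)                    # what is actually on disk
locks_expected=("$lockdir"/spdk_cpu_lock_{000..002})  # what a 0x7 mask implies

if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
    echo "locks match"
else
    echo "stale or missing locks: ${locks[*]}"
fi
rm -rf "$lockdir"
```

The glob sorts lexicographically and the zero-padded brace range expands in the same order, which is what makes the flat string comparison sufficient.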
00:07:13.041 17:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.041 17:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.041 17:11:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.041 [2024-07-22 17:11:31.835096] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:13.041 [2024-07-22 17:11:31.835305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61887 ] 00:07:13.300 [2024-07-22 17:11:32.012128] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:13.300 [2024-07-22 17:11:32.012210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.559 [2024-07-22 17:11:32.277189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.559 [2024-07-22 17:11:32.277323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.559 [2024-07-22 17:11:32.277339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:14.507 17:11:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.507 17:11:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:14.507 17:11:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61905 00:07:14.507 17:11:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61905 /var/tmp/spdk2.sock 00:07:14.507 17:11:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61905 ']' 00:07:14.507 17:11:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:14.507 17:11:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.507 17:11:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.507 17:11:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.507 17:11:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.507 17:11:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.507 [2024-07-22 17:11:33.239647] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:14.507 [2024-07-22 17:11:33.240150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61905 ] 00:07:14.507 [2024-07-22 17:11:33.421930] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:14.507 [2024-07-22 17:11:33.422016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.081 [2024-07-22 17:11:33.947897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.081 [2024-07-22 17:11:33.948019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.081 [2024-07-22 17:11:33.948040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.983 [2024-07-22 17:11:35.904176] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61887 has claimed it. 00:07:16.983 request: 00:07:16.983 { 00:07:16.983 "method": "framework_enable_cpumask_locks", 00:07:16.983 "req_id": 1 00:07:16.983 } 00:07:16.983 Got JSON-RPC error response 00:07:16.983 response: 00:07:16.983 { 00:07:16.983 "code": -32603, 00:07:16.983 "message": "Failed to claim CPU core: 2" 00:07:16.983 } 00:07:16.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61887 /var/tmp/spdk.sock 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61887 ']' 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.983 17:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.242 17:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.242 17:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:17.242 17:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61905 /var/tmp/spdk2.sock 00:07:17.242 17:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61905 ']' 00:07:17.242 17:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:17.242 17:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.242 17:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:17.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:17.242 17:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.242 17:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.500 17:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.500 ************************************ 00:07:17.500 END TEST locking_overlapped_coremask_via_rpc 00:07:17.500 ************************************ 00:07:17.500 17:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:17.500 17:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:17.500 17:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:17.500 17:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:17.500 17:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:17.500 00:07:17.500 real 0m4.759s 00:07:17.500 user 0m1.612s 00:07:17.500 sys 0m0.245s 00:07:17.500 17:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.500 17:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.758 17:11:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:17.758 17:11:36 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:17.758 17:11:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 
61887 ]] 00:07:17.758 17:11:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61887 00:07:17.758 17:11:36 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61887 ']' 00:07:17.758 17:11:36 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61887 00:07:17.758 17:11:36 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:17.758 17:11:36 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:17.758 17:11:36 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61887 00:07:17.758 killing process with pid 61887 00:07:17.758 17:11:36 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:17.758 17:11:36 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:17.758 17:11:36 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61887' 00:07:17.758 17:11:36 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61887 00:07:17.758 17:11:36 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61887 00:07:20.329 17:11:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61905 ]] 00:07:20.329 17:11:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61905 00:07:20.329 17:11:38 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61905 ']' 00:07:20.329 17:11:38 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61905 00:07:20.329 17:11:38 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:20.329 17:11:38 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:20.329 17:11:38 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61905 00:07:20.329 killing process with pid 61905 00:07:20.329 17:11:38 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:20.329 17:11:38 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:20.329 
17:11:38 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61905' 00:07:20.329 17:11:38 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61905 00:07:20.329 17:11:38 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61905 00:07:22.231 17:11:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:22.231 Process with pid 61887 is not found 00:07:22.231 Process with pid 61905 is not found 00:07:22.231 17:11:41 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:22.232 17:11:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61887 ]] 00:07:22.232 17:11:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61887 00:07:22.232 17:11:41 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61887 ']' 00:07:22.232 17:11:41 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61887 00:07:22.232 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61887) - No such process 00:07:22.232 17:11:41 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61887 is not found' 00:07:22.232 17:11:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61905 ]] 00:07:22.232 17:11:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61905 00:07:22.232 17:11:41 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61905 ']' 00:07:22.232 17:11:41 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61905 00:07:22.232 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61905) - No such process 00:07:22.232 17:11:41 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61905 is not found' 00:07:22.232 17:11:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:22.232 ************************************ 00:07:22.232 END TEST cpu_locks 00:07:22.232 ************************************ 00:07:22.232 00:07:22.232 real 0m52.151s 00:07:22.232 user 1m26.963s 00:07:22.232 sys 0m7.767s 
00:07:22.232 17:11:41 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.232 17:11:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.232 17:11:41 event -- common/autotest_common.sh@1142 -- # return 0 00:07:22.232 00:07:22.232 real 1m25.263s 00:07:22.232 user 2m29.809s 00:07:22.232 sys 0m12.006s 00:07:22.232 ************************************ 00:07:22.232 END TEST event 00:07:22.232 ************************************ 00:07:22.232 17:11:41 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.232 17:11:41 event -- common/autotest_common.sh@10 -- # set +x 00:07:22.232 17:11:41 -- common/autotest_common.sh@1142 -- # return 0 00:07:22.232 17:11:41 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:22.232 17:11:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:22.232 17:11:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.232 17:11:41 -- common/autotest_common.sh@10 -- # set +x 00:07:22.232 ************************************ 00:07:22.232 START TEST thread 00:07:22.232 ************************************ 00:07:22.232 17:11:41 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:22.489 * Looking for test storage... 
00:07:22.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:22.489 17:11:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:22.489 17:11:41 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:22.489 17:11:41 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.489 17:11:41 thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.489 ************************************ 00:07:22.489 START TEST thread_poller_perf 00:07:22.489 ************************************ 00:07:22.489 17:11:41 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:22.489 [2024-07-22 17:11:41.292992] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:22.489 [2024-07-22 17:11:41.293135] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62092 ] 00:07:22.746 [2024-07-22 17:11:41.457971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.004 [2024-07-22 17:11:41.709153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.004 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:24.378 ====================================== 00:07:24.378 busy:2208836998 (cyc) 00:07:24.378 total_run_count: 293000 00:07:24.378 tsc_hz: 2200000000 (cyc) 00:07:24.378 ====================================== 00:07:24.378 poller_cost: 7538 (cyc), 3426 (nsec) 00:07:24.378 00:07:24.378 real 0m1.883s 00:07:24.378 user 0m1.652s 00:07:24.378 sys 0m0.117s 00:07:24.378 17:11:43 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.378 17:11:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:24.378 ************************************ 00:07:24.378 END TEST thread_poller_perf 00:07:24.378 ************************************ 00:07:24.378 17:11:43 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:24.378 17:11:43 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.378 17:11:43 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:24.378 17:11:43 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.378 17:11:43 thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.378 ************************************ 00:07:24.378 START TEST thread_poller_perf 00:07:24.378 ************************************ 00:07:24.378 17:11:43 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.378 [2024-07-22 17:11:43.233875] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:24.378 [2024-07-22 17:11:43.234060] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62133 ]
00:07:24.635 [2024-07-22 17:11:43.407190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:24.892 [2024-07-22 17:11:43.676996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:24.892 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:07:26.265 ======================================
00:07:26.265 busy:2204262739 (cyc)
00:07:26.265 total_run_count: 3713000
00:07:26.265 tsc_hz: 2200000000 (cyc)
00:07:26.265 ======================================
00:07:26.265 poller_cost: 593 (cyc), 269 (nsec)
00:07:26.265 ************************************
00:07:26.265 END TEST thread_poller_perf
00:07:26.265 ************************************
00:07:26.265
00:07:26.265 real 0m1.898s
00:07:26.265 user 0m1.671s
00:07:26.265 sys 0m0.114s
00:07:26.265 17:11:45 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:26.265 17:11:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:07:26.265 17:11:45 thread -- common/autotest_common.sh@1142 -- # return 0
00:07:26.265 17:11:45 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:07:26.265
00:07:26.265 real 0m3.961s
00:07:26.265 user 0m3.393s
00:07:26.265 sys 0m0.328s
00:07:26.265 17:11:45 thread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:26.265 17:11:45 thread -- common/autotest_common.sh@10 -- # set +x
00:07:26.265 ************************************
00:07:26.265 END TEST thread
00:07:26.265 ************************************
00:07:26.265 17:11:45 -- common/autotest_common.sh@1142 -- # return 0
00:07:26.265 17:11:45 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh
00:07:26.265 17:11:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:26.265 17:11:45 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:26.265 17:11:45 -- common/autotest_common.sh@10 -- # set +x
00:07:26.265 ************************************
00:07:26.265 START TEST accel
00:07:26.265 ************************************
00:07:26.265 17:11:45 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh
00:07:26.524 * Looking for test storage...
00:07:26.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel
00:07:26.524 17:11:45 accel -- accel/accel.sh@81 -- # declare -A expected_opcs
00:07:26.524 17:11:45 accel -- accel/accel.sh@82 -- # get_expected_opcs
00:07:26.524 17:11:45 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:26.524 17:11:45 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=62210
00:07:26.524 17:11:45 accel -- accel/accel.sh@63 -- # waitforlisten 62210
00:07:26.524 17:11:45 accel -- common/autotest_common.sh@829 -- # '[' -z 62210 ']'
00:07:26.524 17:11:45 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:26.524 17:11:45 accel -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:26.524 17:11:45 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:26.524 17:11:45 accel -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:26.524 17:11:45 accel -- common/autotest_common.sh@10 -- # set +x
00:07:26.524 17:11:45 accel -- accel/accel.sh@61 -- # build_accel_config
00:07:26.524 17:11:45 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63
00:07:26.524 17:11:45 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:26.524 17:11:45 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:26.524 17:11:45 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:26.524 17:11:45 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:26.524 17:11:45 accel -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:26.524 17:11:45 accel -- accel/accel.sh@40 -- # local IFS=,
00:07:26.524 17:11:45 accel -- accel/accel.sh@41 -- # jq -r .
00:07:26.524 [2024-07-22 17:11:45.426975] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:07:26.524 [2024-07-22 17:11:45.427179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62210 ]
00:07:26.782 [2024-07-22 17:11:45.602344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:27.040 [2024-07-22 17:11:45.884905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:27.976 17:11:46 accel -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:27.976 17:11:46 accel -- common/autotest_common.sh@862 -- # return 0
00:07:27.976 17:11:46 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]]
00:07:27.976 17:11:46 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]]
00:07:27.976 17:11:46 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]]
00:07:27.976 17:11:46 accel -- accel/accel.sh@68 -- # [[ -n '' ]]
00:07:27.976 17:11:46 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]"))
00:07:27.976 17:11:46 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments
00:07:27.976 17:11:46 accel -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:27.976 17:11:46 accel -- common/autotest_common.sh@10 -- # set +x
00:07:27.976 17:11:46 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
00:07:27.976 17:11:46 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:27.976 17:11:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # IFS==
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:27.976 17:11:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:27.976 17:11:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # IFS==
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:27.976 17:11:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:27.976 17:11:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # IFS==
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:27.976 17:11:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:27.976 17:11:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # IFS==
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:27.976 17:11:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:27.976 17:11:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # IFS==
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:27.976 17:11:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:27.976 17:11:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # IFS==
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:27.976 17:11:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:27.976 17:11:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # IFS==
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:27.976 17:11:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:27.976 17:11:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # IFS==
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:27.976 17:11:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:27.976 17:11:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # IFS==
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:27.976 17:11:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:27.976 17:11:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # IFS==
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:27.976 17:11:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:27.976 17:11:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # IFS==
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:27.976 17:11:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:27.976 17:11:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # IFS==
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:27.976 17:11:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:27.976 17:11:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # IFS==
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:27.976 17:11:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:27.976 17:11:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # IFS==
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:27.976 17:11:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:27.976 17:11:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # IFS==
00:07:27.976 17:11:46 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:27.976 17:11:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:27.976 17:11:46 accel -- accel/accel.sh@75 -- # killprocess 62210
00:07:27.976 17:11:46 accel -- common/autotest_common.sh@948 -- # '[' -z 62210 ']'
00:07:27.976 17:11:46 accel -- common/autotest_common.sh@952 -- # kill -0 62210
00:07:27.976 17:11:46 accel -- common/autotest_common.sh@953 -- # uname
00:07:27.976 17:11:46 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:27.976 17:11:46 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62210
killing process with pid 62210
17:11:46 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:27.976 17:11:46 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:27.976 17:11:46 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62210'
00:07:27.976 17:11:46 accel -- common/autotest_common.sh@967 -- # kill 62210
00:07:27.976 17:11:46 accel -- common/autotest_common.sh@972 -- # wait 62210
00:07:30.558 17:11:49 accel -- accel/accel.sh@76 -- # trap - ERR
00:07:30.558 17:11:49 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h
00:07:30.558 17:11:49 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:07:30.558 17:11:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:30.558 17:11:49 accel -- common/autotest_common.sh@10 -- # set +x
00:07:30.558 17:11:49 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h
00:07:30.558 17:11:49 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h
00:07:30.558 17:11:49 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config
00:07:30.558 17:11:49 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:30.558 17:11:49 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:30.558 17:11:49 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:30.558 17:11:49 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:30.558 17:11:49 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:30.558 17:11:49 accel.accel_help -- accel/accel.sh@40 -- # local IFS=,
00:07:30.558 17:11:49 accel.accel_help -- accel/accel.sh@41 -- # jq -r .
00:07:30.558 17:11:49 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:30.558 17:11:49 accel.accel_help -- common/autotest_common.sh@10 -- # set +x
00:07:30.558 17:11:49 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:30.558 17:11:49 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress
00:07:30.558 17:11:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:30.558 17:11:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:30.558 17:11:49 accel -- common/autotest_common.sh@10 -- # set +x
00:07:30.558 ************************************
00:07:30.558 START TEST accel_missing_filename
00:07:30.558 ************************************
00:07:30.558 17:11:49 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress
00:07:30.558 17:11:49 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0
00:07:30.558 17:11:49 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress
00:07:30.558 17:11:49 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:07:30.558 17:11:49 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:30.558 17:11:49 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf
00:07:30.558 17:11:49 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:30.558 17:11:49 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress
00:07:30.558 17:11:49 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress
00:07:30.558 17:11:49 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config
00:07:30.558 17:11:49 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:30.558 17:11:49 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:30.558 17:11:49 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:30.558 17:11:49 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:30.558 17:11:49 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:30.558 17:11:49 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=,
00:07:30.558 17:11:49 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r .
00:07:30.558 [2024-07-22 17:11:49.306824] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:07:30.558 [2024-07-22 17:11:49.307022] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62291 ]
00:07:30.558 [2024-07-22 17:11:49.483136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:31.124 [2024-07-22 17:11:49.773198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:31.124 [2024-07-22 17:11:49.980249] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:31.690 [2024-07-22 17:11:50.491578] accel_perf.c:1463:main: *ERROR*: ERROR starting application
00:07:32.256 A filename is required.
00:07:32.256 17:11:50 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234
00:07:32.256 17:11:50 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:32.256 17:11:50 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106
00:07:32.256 17:11:50 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in
00:07:32.256 17:11:50 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1
00:07:32.256 17:11:50 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:32.256
00:07:32.256 real 0m1.664s
00:07:32.256 user 0m1.399s
00:07:32.256 sys 0m0.205s
00:07:32.256 17:11:50 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:32.256 ************************************
00:07:32.256 17:11:50 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x
00:07:32.256 END TEST accel_missing_filename
00:07:32.256 ************************************
00:07:32.256 17:11:50 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:32.256 17:11:50 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:07:32.257 17:11:50 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']'
00:07:32.257 17:11:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:32.257 17:11:50 accel -- common/autotest_common.sh@10 -- # set +x
00:07:32.257 ************************************
00:07:32.257 START TEST accel_compress_verify
00:07:32.257 ************************************
00:07:32.257 17:11:50 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:07:32.257 17:11:50 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0
00:07:32.257 17:11:50 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:07:32.257 17:11:50 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:07:32.257 17:11:50 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:32.257 17:11:50 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf
00:07:32.257 17:11:50 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:32.257 17:11:50 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:07:32.257 17:11:50 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:07:32.257 17:11:50 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config
00:07:32.257 17:11:50 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:32.257 17:11:50 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:32.257 17:11:50 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:32.257 17:11:50 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:32.257 17:11:50 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:32.257 17:11:50 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=,
00:07:32.257 17:11:50 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r .
00:07:32.257 [2024-07-22 17:11:51.022361] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:07:32.257 [2024-07-22 17:11:51.022555] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62333 ]
00:07:32.515 [2024-07-22 17:11:51.191449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:32.823 [2024-07-22 17:11:51.488122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:32.823 [2024-07-22 17:11:51.717181] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:33.390 [2024-07-22 17:11:52.268516] accel_perf.c:1463:main: *ERROR*: ERROR starting application
00:07:33.956
00:07:33.956 Compression does not support the verify option, aborting.
00:07:33.956 17:11:52 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161
00:07:33.956 17:11:52 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:33.956 17:11:52 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33
00:07:33.956 17:11:52 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in
00:07:33.956 17:11:52 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1
00:07:33.956 17:11:52 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:33.956
00:07:33.956 real 0m1.756s
00:07:33.956 user 0m1.488s
00:07:33.956 sys 0m0.207s
00:07:33.957 17:11:52 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:33.957 ************************************
00:07:33.957 END TEST accel_compress_verify
00:07:33.957 ************************************
00:07:33.957 17:11:52 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x
00:07:33.957 17:11:52 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:33.957 17:11:52 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar
00:07:33.957 17:11:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:33.957 17:11:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:33.957 17:11:52 accel -- common/autotest_common.sh@10 -- # set +x
00:07:33.957 ************************************
00:07:33.957 START TEST accel_wrong_workload
00:07:33.957 ************************************
00:07:33.957 17:11:52 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar
00:07:33.957 17:11:52 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0
00:07:33.957 17:11:52 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar
00:07:33.957 17:11:52 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:07:33.957 17:11:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:33.957 17:11:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf
00:07:33.957 17:11:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:33.957 17:11:52 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar
00:07:33.957 17:11:52 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar
00:07:33.957 17:11:52 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config
00:07:33.957 17:11:52 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:33.957 17:11:52 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:33.957 17:11:52 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:33.957 17:11:52 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:33.957 17:11:52 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:33.957 17:11:52 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=,
00:07:33.957 17:11:52 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r .
Unsupported workload type: foobar
[2024-07-22 17:11:52.825400] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1
00:07:33.957 accel_perf options:
00:07:33.957 [-h help message]
00:07:33.957 [-q queue depth per core]
00:07:33.957 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:07:33.957 [-T number of threads per core
00:07:33.957 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:07:33.957 [-t time in seconds]
00:07:33.957 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:07:33.957 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy
00:07:33.957 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:07:33.957 [-l for compress/decompress workloads, name of uncompressed input file
00:07:33.957 [-S for crc32c workload, use this seed value (default 0)
00:07:33.957 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:07:33.957 [-f for fill workload, use this BYTE value (default 255)
00:07:33.957 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:07:33.957 [-y verify result if this switch is on]
00:07:33.957 [-a tasks to allocate per core (default: same value as -q)]
00:07:33.957 Can be used to spread operations across a wider range of memory.
00:07:33.957 17:11:52 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1
00:07:33.957 17:11:52 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:33.957 17:11:52 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:07:33.957 17:11:52 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:33.957
00:07:33.957 real 0m0.077s
00:07:33.957 user 0m0.082s
00:07:33.957 sys 0m0.044s
00:07:33.957 17:11:52 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:33.957 17:11:52 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x
00:07:33.957 ************************************
00:07:33.957 END TEST accel_wrong_workload
00:07:33.957 ************************************
00:07:33.957 17:11:52 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:33.957 17:11:52 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1
00:07:33.957 17:11:52 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']'
00:07:33.957 17:11:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:33.957 17:11:52 accel -- common/autotest_common.sh@10 -- # set +x
00:07:33.957 ************************************
00:07:33.957 START TEST accel_negative_buffers
00:07:33.957 ************************************
00:07:33.957 17:11:52 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1
00:07:33.957 17:11:52 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0
00:07:33.957 17:11:52 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1
00:07:33.957 17:11:52 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:07:33.957 17:11:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:33.957 17:11:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf
00:07:33.957 17:11:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:33.957 17:11:52 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1
00:07:33.957 17:11:52 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1
00:07:33.957 17:11:52 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config
00:07:33.957 17:11:52 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:33.957 17:11:52 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:33.957 17:11:52 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:33.957 17:11:52 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:33.957 17:11:52 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:33.957 17:11:52 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=,
00:07:33.957 17:11:52 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r .
00:07:34.216 -x option must be non-negative.
00:07:34.216 [2024-07-22 17:11:52.950578] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1
00:07:34.216 accel_perf options:
00:07:34.216 [-h help message]
00:07:34.216 [-q queue depth per core]
00:07:34.216 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:07:34.216 [-T number of threads per core
00:07:34.216 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:07:34.216 [-t time in seconds]
00:07:34.216 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:07:34.216 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy
00:07:34.216 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:07:34.216 [-l for compress/decompress workloads, name of uncompressed input file
00:07:34.216 [-S for crc32c workload, use this seed value (default 0)
00:07:34.216 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:07:34.216 [-f for fill workload, use this BYTE value (default 255)
00:07:34.216 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:07:34.216 [-y verify result if this switch is on]
00:07:34.216 [-a tasks to allocate per core (default: same value as -q)]
00:07:34.216 Can be used to spread operations across a wider range of memory.
00:07:34.216 17:11:52 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1
00:07:34.216 17:11:52 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:34.216 17:11:52 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:07:34.216 17:11:52 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:34.216
00:07:34.216 real 0m0.084s
00:07:34.216 user 0m0.089s
00:07:34.216 sys 0m0.038s
00:07:34.216 17:11:52 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:34.216 17:11:52 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x
00:07:34.216 ************************************
00:07:34.216 END TEST accel_negative_buffers
00:07:34.216 ************************************
00:07:34.216 17:11:53 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:34.216 17:11:53 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y
00:07:34.216 17:11:53 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:07:34.216 17:11:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:34.216 17:11:53 accel -- common/autotest_common.sh@10 -- # set +x
00:07:34.216 ************************************
00:07:34.216 START TEST accel_crc32c
00:07:34.216 ************************************
00:07:34.216 17:11:53 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y
00:07:34.216 17:11:53 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:07:34.216 17:11:53 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module
00:07:34.216 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:34.216 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:34.216 17:11:53 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y
00:07:34.216 17:11:53 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:07:34.216 17:11:53 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:07:34.216 17:11:53 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:34.216 17:11:53 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:34.216 17:11:53 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:34.216 17:11:53 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:34.216 17:11:53 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:34.216 17:11:53 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:07:34.216 17:11:53 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r .
00:07:34.216 [2024-07-22 17:11:53.083928] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:07:34.216 [2024-07-22 17:11:53.084176] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62411 ]
00:07:34.475 [2024-07-22 17:11:53.260789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:34.733 [2024-07-22 17:11:53.542385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r
var val 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.991 
17:11:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.991 17:11:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:36.912 17:11:55 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.912 00:07:36.912 real 0m2.628s 00:07:36.912 user 0m2.343s 00:07:36.912 sys 0m0.189s 00:07:36.912 17:11:55 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.912 17:11:55 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:36.912 ************************************ 00:07:36.912 END TEST accel_crc32c 00:07:36.912 ************************************ 00:07:36.912 17:11:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:36.912 17:11:55 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:36.912 17:11:55 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:36.912 17:11:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.912 17:11:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.912 ************************************ 00:07:36.912 START TEST accel_crc32c_C2 00:07:36.912 
************************************ 00:07:36.912 17:11:55 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:36.912 17:11:55 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.912 17:11:55 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:36.912 17:11:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.912 17:11:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.912 17:11:55 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:36.912 17:11:55 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:36.912 17:11:55 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.912 17:11:55 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.912 17:11:55 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.912 17:11:55 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.912 17:11:55 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.912 17:11:55 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.912 17:11:55 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:36.912 17:11:55 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:36.912 [2024-07-22 17:11:55.766825] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:36.913 [2024-07-22 17:11:55.767019] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62452 ] 00:07:37.171 [2024-07-22 17:11:55.944128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.429 [2024-07-22 17:11:56.198061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.688 17:11:56 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.688 17:11:56 accel.accel_crc32c_C2 
-- accel/accel.sh@20 -- # val=32 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.688 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.689 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.689 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:37.689 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.689 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.689 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.689 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.689 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.689 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.689 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.689 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:37.689 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.689 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.689 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.689 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.689 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.689 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.689 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.689 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.689 
17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.689 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.689 17:11:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case 
"$var" in 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.601 00:07:39.601 real 0m2.652s 00:07:39.601 user 0m2.352s 00:07:39.601 sys 0m0.200s 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.601 17:11:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:39.601 ************************************ 00:07:39.601 END TEST accel_crc32c_C2 00:07:39.601 ************************************ 00:07:39.601 17:11:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:39.601 17:11:58 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:39.601 17:11:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:39.601 17:11:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.601 17:11:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.601 ************************************ 00:07:39.601 START TEST accel_copy 00:07:39.601 ************************************ 00:07:39.601 17:11:58 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:39.601 17:11:58 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:39.601 17:11:58 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:39.601 17:11:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.601 17:11:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:39.601 17:11:58 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:39.601 17:11:58 accel.accel_copy -- accel/accel.sh@12 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:39.601 17:11:58 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:39.601 17:11:58 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.601 17:11:58 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.601 17:11:58 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.602 17:11:58 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.602 17:11:58 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.602 17:11:58 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:39.602 17:11:58 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:39.602 [2024-07-22 17:11:58.457087] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:39.602 [2024-07-22 17:11:58.457234] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62504 ] 00:07:39.859 [2024-07-22 17:11:58.623480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.118 [2024-07-22 17:11:58.875115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 17:11:59 
accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:40.377 17:11:59 
accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 17:11:59 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.377 17:11:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.275 17:12:00 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:42.275 17:12:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.275 00:07:42.275 real 0m2.590s 00:07:42.275 user 0m0.007s 00:07:42.275 sys 0m0.008s 00:07:42.275 17:12:00 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.275 17:12:01 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:42.275 ************************************ 00:07:42.275 END TEST accel_copy 00:07:42.275 ************************************ 00:07:42.275 17:12:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.275 17:12:01 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:42.275 17:12:01 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:42.275 17:12:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.275 17:12:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.275 ************************************ 00:07:42.275 START TEST accel_fill 00:07:42.275 ************************************ 00:07:42.275 17:12:01 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:42.276 17:12:01 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:42.276 17:12:01 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:42.276 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.276 17:12:01 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:42.276 17:12:01 
accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.276 17:12:01 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:42.276 17:12:01 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:42.276 17:12:01 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.276 17:12:01 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.276 17:12:01 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.276 17:12:01 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.276 17:12:01 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.276 17:12:01 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:42.276 17:12:01 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:42.276 [2024-07-22 17:12:01.091509] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:42.276 [2024-07-22 17:12:01.091655] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62551 ] 00:07:42.533 [2024-07-22 17:12:01.253018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.791 [2024-07-22 17:12:01.511922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.791 17:12:01 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.791 17:12:01 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.791 17:12:01 
accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.791 17:12:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:44.692 17:12:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:44.692 17:12:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:44.692 17:12:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:44.692 17:12:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:44.692 17:12:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:44.692 17:12:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:44.692 17:12:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:44.693 17:12:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:44.693 17:12:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:44.693 17:12:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:44.693 17:12:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:44.693 17:12:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:44.693 17:12:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:44.693 17:12:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:44.693 17:12:03 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:07:44.693 17:12:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:44.693 17:12:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:44.693 17:12:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:44.693 17:12:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:44.693 17:12:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:44.693 17:12:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:44.693 17:12:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:44.693 17:12:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:44.693 17:12:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:44.693 17:12:03 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.693 17:12:03 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:44.693 ************************************ 00:07:44.693 END TEST accel_fill 00:07:44.693 ************************************ 00:07:44.693 17:12:03 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.693 00:07:44.693 real 0m2.574s 00:07:44.693 user 0m2.283s 00:07:44.693 sys 0m0.195s 00:07:44.693 17:12:03 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.693 17:12:03 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:44.950 17:12:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:44.950 17:12:03 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:44.950 17:12:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:44.950 17:12:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.950 17:12:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.950 ************************************ 00:07:44.950 START TEST accel_copy_crc32c 00:07:44.950 ************************************ 00:07:44.950 17:12:03 accel.accel_copy_crc32c -- 
common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:44.950 17:12:03 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:44.950 17:12:03 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:44.950 17:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.950 17:12:03 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:44.950 17:12:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.950 17:12:03 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:44.950 17:12:03 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:44.950 17:12:03 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.950 17:12:03 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.950 17:12:03 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.950 17:12:03 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.950 17:12:03 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.950 17:12:03 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:44.950 17:12:03 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:44.950 [2024-07-22 17:12:03.724374] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:44.950 [2024-07-22 17:12:03.724542] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62597 ] 00:07:45.209 [2024-07-22 17:12:03.900227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.467 [2024-07-22 17:12:04.186264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.467 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.467 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.467 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.467 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.467 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.467 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.467 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.467 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.467 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:45.467 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.467 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.467 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.467 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.467 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.467 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.467 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.467 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # 
case "$var" in 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.468 17:12:04 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 
00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.468 17:12:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.370 
17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.370 00:07:47.370 real 0m2.626s 00:07:47.370 user 0m2.350s 00:07:47.370 sys 0m0.180s 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.370 17:12:06 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:47.370 ************************************ 00:07:47.370 END TEST accel_copy_crc32c 00:07:47.370 ************************************ 00:07:47.628 17:12:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:47.628 17:12:06 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:47.628 17:12:06 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:47.628 17:12:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.628 17:12:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.628 ************************************ 00:07:47.628 START TEST accel_copy_crc32c_C2 00:07:47.628 
************************************ 00:07:47.628 17:12:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:47.628 17:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:47.628 17:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:47.628 17:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.628 17:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.628 17:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:47.628 17:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.628 17:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:47.628 17:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.628 17:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.628 17:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.628 17:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.628 17:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.628 17:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:47.628 17:12:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:47.628 [2024-07-22 17:12:06.405601] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:47.628 [2024-07-22 17:12:06.405775] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62644 ] 00:07:47.887 [2024-07-22 17:12:06.582160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.145 [2024-07-22 17:12:06.870028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:48.145 17:12:07 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.145 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.146 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:48.146 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.146 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.146 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.146 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:48.146 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.146 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.146 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.146 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:48.146 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.146 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.146 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.146 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.404 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.404 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.404 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.404 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # 
val=Yes 00:07:48.404 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.404 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.404 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.404 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:48.404 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.404 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.404 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.404 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:48.404 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.404 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.404 17:12:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.307 
17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:50.307 ************************************ 00:07:50.307 END TEST accel_copy_crc32c_C2 00:07:50.307 ************************************ 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.307 00:07:50.307 real 0m2.646s 00:07:50.307 user 0m2.341s 00:07:50.307 sys 0m0.206s 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.307 17:12:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:50.307 17:12:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:50.307 17:12:09 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:50.307 17:12:09 accel -- common/autotest_common.sh@1099 -- # 
'[' 7 -le 1 ']' 00:07:50.307 17:12:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.307 17:12:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.307 ************************************ 00:07:50.307 START TEST accel_dualcast 00:07:50.307 ************************************ 00:07:50.307 17:12:09 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:50.307 17:12:09 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:50.307 17:12:09 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:50.307 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.307 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.307 17:12:09 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:50.307 17:12:09 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:50.307 17:12:09 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:50.307 17:12:09 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.307 17:12:09 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.307 17:12:09 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.307 17:12:09 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.307 17:12:09 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.307 17:12:09 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:50.307 17:12:09 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:50.307 [2024-07-22 17:12:09.103631] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:50.307 [2024-07-22 17:12:09.103854] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62690 ] 00:07:50.565 [2024-07-22 17:12:09.283279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.842 [2024-07-22 17:12:09.568146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.102 17:12:09 accel.accel_dualcast 
-- accel/accel.sh@19 -- # IFS=: 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 
00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.102 17:12:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case 
"$var" in 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:53.050 17:12:11 accel.accel_dualcast -- accel/accel.sh@27 
-- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.050 00:07:53.050 real 0m2.654s 00:07:53.050 user 0m2.345s 00:07:53.050 sys 0m0.207s 00:07:53.050 ************************************ 00:07:53.050 END TEST accel_dualcast 00:07:53.050 ************************************ 00:07:53.050 17:12:11 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.050 17:12:11 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:53.050 17:12:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:53.050 17:12:11 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:53.050 17:12:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:53.050 17:12:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.050 17:12:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:53.050 ************************************ 00:07:53.050 START TEST accel_compare 00:07:53.050 ************************************ 00:07:53.050 17:12:11 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:53.050 17:12:11 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:53.050 17:12:11 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:53.050 17:12:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.050 17:12:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.050 17:12:11 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:53.050 17:12:11 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:53.050 17:12:11 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:53.050 17:12:11 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.050 17:12:11 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.050 17:12:11 
accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.050 17:12:11 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.050 17:12:11 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.050 17:12:11 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:53.050 17:12:11 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:53.050 [2024-07-22 17:12:11.796483] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:53.050 [2024-07-22 17:12:11.796657] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62737 ] 00:07:53.050 [2024-07-22 17:12:11.975508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.629 [2024-07-22 17:12:12.263675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.629 17:12:12 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.629 
17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:53.629 17:12:12 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:53.629 17:12:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@21 
-- # case "$var" in 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:55.530 17:12:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.530 00:07:55.530 real 0m2.648s 00:07:55.530 user 0m2.343s 00:07:55.530 sys 0m0.208s 00:07:55.530 17:12:14 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.530 ************************************ 00:07:55.530 END TEST accel_compare 00:07:55.530 17:12:14 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:55.530 ************************************ 00:07:55.530 17:12:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:55.530 17:12:14 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:55.530 17:12:14 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:55.530 17:12:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.530 17:12:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:55.530 ************************************ 00:07:55.530 START TEST accel_xor 00:07:55.530 ************************************ 00:07:55.530 17:12:14 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:55.530 17:12:14 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:55.530 17:12:14 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:55.530 17:12:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:55.530 17:12:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:55.530 17:12:14 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:55.530 17:12:14 accel.accel_xor -- accel/accel.sh@12 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:55.530 17:12:14 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:55.530 17:12:14 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:55.530 17:12:14 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:55.530 17:12:14 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.530 17:12:14 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.530 17:12:14 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:55.530 17:12:14 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:55.530 17:12:14 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:55.802 [2024-07-22 17:12:14.504927] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:55.802 [2024-07-22 17:12:14.505133] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62789 ] 00:07:55.802 [2024-07-22 17:12:14.685889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.060 [2024-07-22 17:12:14.966275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.319 17:12:15 accel.accel_xor -- 
accel/accel.sh@20 -- # val=0x1 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 
00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.319 17:12:15 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.319 17:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 
00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:58.218 00:07:58.218 real 0m2.673s 00:07:58.218 user 0m0.009s 00:07:58.218 sys 0m0.004s 00:07:58.218 17:12:17 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.218 ************************************ 00:07:58.218 END TEST accel_xor 00:07:58.218 ************************************ 00:07:58.218 17:12:17 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:58.218 17:12:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:58.218 17:12:17 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:58.218 17:12:17 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:58.218 17:12:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.218 17:12:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:58.218 ************************************ 00:07:58.218 START TEST accel_xor 00:07:58.218 ************************************ 00:07:58.218 17:12:17 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:58.218 17:12:17 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:58.476 [2024-07-22 17:12:17.205111] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:58.476 [2024-07-22 17:12:17.205264] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62830 ] 00:07:58.476 [2024-07-22 17:12:17.369619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.734 [2024-07-22 17:12:17.618002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- 
# read -r var val 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.992 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.993 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.993 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.993 17:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.993 17:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.993 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.993 17:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.892 17:12:19 accel.accel_xor -- 
accel/accel.sh@19 -- # IFS=:
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:08:00.892 17:12:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:00.892
00:08:00.892 real 0m2.574s
00:08:00.892 user 0m2.285s
00:08:00.892 sys 0m0.194s
00:08:00.892 ************************************
00:08:00.892 END TEST accel_xor
************************************
00:08:00.892 17:12:19 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:00.892 17:12:19 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:08:00.892 17:12:19 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:00.892 17:12:19 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:08:00.892 17:12:19 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:08:00.892 17:12:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:00.892 17:12:19 accel -- common/autotest_common.sh@10 -- # set +x
00:08:00.892 ************************************
00:08:00.892 START TEST accel_dif_verify
00:08:00.892 ************************************
00:08:00.892 17:12:19 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify
00:08:00.892 17:12:19 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc
00:08:00.892 17:12:19 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module
00:08:00.892 17:12:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:00.892 17:12:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:00.892 17:12:19 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:08:00.892 17:12:19 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:08:00.892 17:12:19 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:08:00.892 17:12:19 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:00.892 17:12:19 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:00.892 17:12:19 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:00.892 17:12:19 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:00.892 17:12:19
accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:00.892 17:12:19 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=,
00:08:00.892 17:12:19 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r .
00:08:00.893 [2024-07-22 17:12:19.833693] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:08:00.893 [2024-07-22 17:12:19.833835] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62882 ]
00:08:01.151 [2024-07-22 17:12:20.008209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:01.409 [2024-07-22 17:12:20.247995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:01.667 17:12:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:01.667 17:12:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:01.667 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:01.667 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:01.667 17:12:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:01.667 17:12:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:01.667 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:01.667 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:01.667 17:12:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1
00:08:01.667 17:12:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:01.667 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:01.667 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:01.667 17:12:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:01.667 17:12:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case
"$var" in 00:08:01.667 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:01.667 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:01.668 17:12:20 accel.accel_dif_verify -- 
accel/accel.sh@20 -- # val='8 bytes' 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:01.668 17:12:20 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:01.668 17:12:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r 
var val
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:08:03.570 17:12:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:03.570 ************************************
00:08:03.570 END TEST accel_dif_verify
************************************
00:08:03.570
00:08:03.570 real 0m2.568s
00:08:03.570 user 0m2.264s
00:08:03.570 sys 0m0.207s
00:08:03.570 17:12:22 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:03.570 17:12:22 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:08:03.570 17:12:22 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:03.570 17:12:22 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:08:03.570 17:12:22 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:08:03.570 17:12:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:03.570 17:12:22 accel -- common/autotest_common.sh@10 -- # set +x
00:08:03.570 ************************************
00:08:03.570 START TEST accel_dif_generate
00:08:03.570 ************************************
00:08:03.570 17:12:22 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate
00:08:03.570 17:12:22 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc
00:08:03.570 17:12:22 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module
00:08:03.570 17:12:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:03.570 17:12:22 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:08:03.570 17:12:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:03.570 17:12:22 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:08:03.570 17:12:22 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:08:03.570 17:12:22 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:03.570 17:12:22 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:03.570 17:12:22 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:03.570 17:12:22 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:03.570 17:12:22 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:03.570 17:12:22 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=,
00:08:03.570 17:12:22
accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r .
00:08:03.570 [2024-07-22 17:12:22.453238] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:08:03.570 [2024-07-22 17:12:22.453464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62929 ]
00:08:03.828 [2024-07-22 17:12:22.630780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:04.087 [2024-07-22 17:12:22.933634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:04.345 17:12:23 accel.accel_dif_generate
-- accel/accel.sh@19 -- # read -r var val 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:04.345 17:12:23 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:04.345 
17:12:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:04.345 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:04.346 17:12:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:04.346 17:12:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:04.346 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:04.346 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:04.346 17:12:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:04.346 17:12:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:04.346 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:04.346 17:12:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var 
val
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:08:06.243 17:12:25 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:06.243
00:08:06.243 real 0m2.653s
00:08:06.243 user 0m0.017s
00:08:06.243 sys 0m0.002s
00:08:06.243 17:12:25 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:06.243 17:12:25 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:08:06.243 ************************************
00:08:06.243 END TEST accel_dif_generate
00:08:06.243 ************************************
00:08:06.243 17:12:25 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:06.243 17:12:25 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:08:06.243 17:12:25 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:08:06.243 17:12:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:06.243 17:12:25 accel -- common/autotest_common.sh@10 -- # set +x
00:08:06.243 ************************************
00:08:06.243 START TEST accel_dif_generate_copy
00:08:06.243 ************************************
00:08:06.243 17:12:25 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy
00:08:06.243 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc
00:08:06.243 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module
00:08:06.243 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:08:06.243 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:08:06.243 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:08:06.243 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:08:06.243 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:08:06.244 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:06.244 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:06.244 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:06.244 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:06.244 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:06.244 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:06.244 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:06.244 [2024-07-22 17:12:25.160727] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:06.244 [2024-07-22 17:12:25.160890] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62975 ] 00:08:06.502 [2024-07-22 17:12:25.335130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.760 [2024-07-22 17:12:25.643215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:07.019 17:12:25 
accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 
00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:07.019 17:12:25 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:07.019 17:12:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 
-- # case "$var" in 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.921 00:08:08.921 real 0m2.664s 00:08:08.921 user 0m2.358s 00:08:08.921 sys 0m0.210s 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.921 17:12:27 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:08.921 ************************************ 00:08:08.921 END TEST accel_dif_generate_copy 00:08:08.921 
************************************ 00:08:08.921 17:12:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:08.921 17:12:27 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:08.921 17:12:27 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:08.921 17:12:27 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:08.921 17:12:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.921 17:12:27 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.921 ************************************ 00:08:08.921 START TEST accel_comp 00:08:08.921 ************************************ 00:08:08.921 17:12:27 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:08.921 17:12:27 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:08.921 17:12:27 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:08.921 17:12:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:08.921 17:12:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:08.921 17:12:27 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:08.921 17:12:27 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:08.921 17:12:27 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:08.922 17:12:27 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.922 17:12:27 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.922 17:12:27 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.922 17:12:27 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.922 17:12:27 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:08:08.922 17:12:27 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:08.922 17:12:27 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:09.180 [2024-07-22 17:12:27.875138] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:09.180 [2024-07-22 17:12:27.875329] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63026 ] 00:08:09.180 [2024-07-22 17:12:28.053012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.438 [2024-07-22 17:12:28.346573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var 
val 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # 
read -r var val 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:09.697 17:12:28 accel.accel_comp 
-- accel/accel.sh@21 -- # case "$var" in 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:09.697 17:12:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:11.598 17:12:30 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.598 00:08:11.598 real 0m2.645s 00:08:11.598 user 0m2.341s 00:08:11.598 sys 0m0.206s 00:08:11.598 17:12:30 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.598 17:12:30 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:11.598 ************************************ 00:08:11.598 END TEST accel_comp 00:08:11.598 ************************************ 00:08:11.599 17:12:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:11.599 17:12:30 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:11.599 17:12:30 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:11.599 17:12:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.599 17:12:30 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.599 ************************************ 00:08:11.599 START TEST accel_decomp 00:08:11.599 ************************************ 00:08:11.599 17:12:30 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:11.599 17:12:30 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:11.599 17:12:30 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:11.599 17:12:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:11.599 
17:12:30 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:11.599 17:12:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:11.599 17:12:30 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:11.599 17:12:30 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:11.599 17:12:30 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.599 17:12:30 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.599 17:12:30 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.599 17:12:30 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.599 17:12:30 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.599 17:12:30 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:11.599 17:12:30 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:11.858 [2024-07-22 17:12:30.567630] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:11.858 [2024-07-22 17:12:30.567787] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63068 ] 00:08:11.858 [2024-07-22 17:12:30.738277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.117 [2024-07-22 17:12:30.970315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 
00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:12.376 17:12:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:12.377 17:12:31 accel.accel_decomp -- 
accel/accel.sh@21 -- # case "$var" in 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 
00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:12.377 17:12:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.307 17:12:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:14.307 17:12:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.307 17:12:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.307 17:12:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.307 17:12:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:14.307 17:12:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.307 17:12:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.308 17:12:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.308 17:12:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:14.308 17:12:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.308 17:12:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.308 17:12:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.308 17:12:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:14.308 17:12:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.308 17:12:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.308 17:12:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.308 17:12:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:14.308 17:12:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.308 17:12:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.308 17:12:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.308 17:12:33 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:08:14.308 17:12:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.308 17:12:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.308 17:12:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.308 17:12:33 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:14.308 17:12:33 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:14.308 17:12:33 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:14.308 00:08:14.308 real 0m2.582s 00:08:14.308 user 0m2.270s 00:08:14.308 sys 0m0.213s 00:08:14.308 17:12:33 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.308 17:12:33 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:14.308 ************************************ 00:08:14.308 END TEST accel_decomp 00:08:14.308 ************************************ 00:08:14.308 17:12:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:14.308 17:12:33 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:14.308 17:12:33 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:14.308 17:12:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.308 17:12:33 accel -- common/autotest_common.sh@10 -- # set +x 00:08:14.308 ************************************ 00:08:14.308 START TEST accel_decomp_full 00:08:14.308 ************************************ 00:08:14.308 17:12:33 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:14.308 17:12:33 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:14.308 17:12:33 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:14.308 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
00:08:14.308 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:14.308 17:12:33 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:14.308 17:12:33 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:14.308 17:12:33 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:14.308 17:12:33 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:14.308 17:12:33 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:14.308 17:12:33 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.308 17:12:33 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.308 17:12:33 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:14.308 17:12:33 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:14.308 17:12:33 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:14.308 [2024-07-22 17:12:33.202131] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:14.308 [2024-07-22 17:12:33.202318] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63119 ] 00:08:14.565 [2024-07-22 17:12:33.377366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.823 [2024-07-22 17:12:33.618065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.082 17:12:33 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.082 17:12:33 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.082 17:12:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@19 
-- # IFS=: 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:16.984 ************************************ 00:08:16.984 END TEST accel_decomp_full 00:08:16.984 ************************************ 00:08:16.984 17:12:35 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.984 00:08:16.984 real 0m2.596s 00:08:16.984 user 0m2.308s 00:08:16.984 sys 0m0.192s 00:08:16.984 17:12:35 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.984 17:12:35 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:16.984 17:12:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:16.984 17:12:35 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:16.984 17:12:35 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:16.984 17:12:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.984 17:12:35 accel -- common/autotest_common.sh@10 -- # set +x 00:08:16.984 
************************************ 00:08:16.984 START TEST accel_decomp_mcore 00:08:16.984 ************************************ 00:08:16.984 17:12:35 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:16.984 17:12:35 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:16.984 17:12:35 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:16.984 17:12:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.984 17:12:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.984 17:12:35 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:16.984 17:12:35 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:16.984 17:12:35 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:16.984 17:12:35 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:16.984 17:12:35 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:16.984 17:12:35 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.984 17:12:35 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.984 17:12:35 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:16.984 17:12:35 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:16.984 17:12:35 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:16.984 [2024-07-22 17:12:35.840109] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:16.984 [2024-07-22 17:12:35.840252] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63167 ] 00:08:17.243 [2024-07-22 17:12:36.005355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.501 [2024-07-22 17:12:36.248250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.501 [2024-07-22 17:12:36.248400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.501 [2024-07-22 17:12:36.248646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.501 [2024-07-22 17:12:36.250162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:17.759 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.760 17:12:36 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.760 17:12:36 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.760 17:12:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.661 
17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.661 00:08:19.661 real 0m2.595s 00:08:19.661 user 0m0.017s 00:08:19.661 sys 0m0.003s 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.661 ************************************ 00:08:19.661 END TEST accel_decomp_mcore 00:08:19.661 ************************************ 00:08:19.661 17:12:38 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:19.661 17:12:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:19.661 17:12:38 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:19.661 17:12:38 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:19.661 17:12:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.661 17:12:38 accel -- common/autotest_common.sh@10 -- # set +x 00:08:19.661 ************************************ 00:08:19.661 START TEST accel_decomp_full_mcore 00:08:19.661 ************************************ 00:08:19.661 17:12:38 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:19.661 17:12:38 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:19.661 17:12:38 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:19.661 17:12:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.661 
17:12:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.661 17:12:38 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:19.661 17:12:38 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:19.661 17:12:38 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:19.661 17:12:38 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:19.661 17:12:38 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:19.661 17:12:38 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.661 17:12:38 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.661 17:12:38 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:19.661 17:12:38 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:19.661 17:12:38 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:19.661 [2024-07-22 17:12:38.495419] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:19.661 [2024-07-22 17:12:38.495611] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63211 ] 00:08:19.921 [2024-07-22 17:12:38.672488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:20.179 [2024-07-22 17:12:38.959239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.179 [2024-07-22 17:12:38.959353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.179 [2024-07-22 17:12:38.959493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.179 [2024-07-22 17:12:38.959579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 
00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:20.437 
17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:20.437 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 
00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:20.438 17:12:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:22.338 17:12:41 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 
-- # read -r var val 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.338 ************************************ 00:08:22.338 END TEST accel_decomp_full_mcore 00:08:22.338 ************************************ 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:22.338 00:08:22.338 real 0m2.743s 00:08:22.338 user 0m0.017s 00:08:22.338 sys 0m0.005s 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.338 17:12:41 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:22.338 17:12:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:22.338 17:12:41 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:22.338 17:12:41 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:22.338 17:12:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.338 17:12:41 accel -- common/autotest_common.sh@10 -- # set +x 00:08:22.338 
************************************ 00:08:22.338 START TEST accel_decomp_mthread 00:08:22.338 ************************************ 00:08:22.338 17:12:41 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:22.338 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:22.338 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:22.338 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:22.338 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:22.339 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:22.339 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:22.339 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:22.339 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:22.339 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:22.339 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:22.339 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.339 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:22.339 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:22.339 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:22.339 [2024-07-22 17:12:41.281263] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:22.339 [2024-07-22 17:12:41.281428] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63266 ] 00:08:22.597 [2024-07-22 17:12:41.447723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.854 [2024-07-22 17:12:41.684156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:23.113 17:12:41 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # 
accel_module=software 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.113 17:12:41 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.113 17:12:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.013 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:25.013 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.013 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.013 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.013 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:25.013 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.013 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.013 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.013 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:25.013 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.013 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.013 17:12:43 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:08:25.013 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:25.014 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.014 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.014 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.014 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:25.014 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.014 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.014 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.014 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:25.014 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.014 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.014 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.014 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:25.014 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.014 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.014 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.014 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:25.014 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:25.014 17:12:43 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:25.014 00:08:25.014 real 0m2.557s 00:08:25.014 user 0m2.286s 00:08:25.014 sys 0m0.175s 00:08:25.014 17:12:43 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.014 17:12:43 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 
00:08:25.014 ************************************ 00:08:25.014 END TEST accel_decomp_mthread 00:08:25.014 ************************************ 00:08:25.014 17:12:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:25.014 17:12:43 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:25.014 17:12:43 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:25.014 17:12:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.014 17:12:43 accel -- common/autotest_common.sh@10 -- # set +x 00:08:25.014 ************************************ 00:08:25.014 START TEST accel_decomp_full_mthread 00:08:25.014 ************************************ 00:08:25.014 17:12:43 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:25.014 17:12:43 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:25.014 17:12:43 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:25.014 17:12:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.014 17:12:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.014 17:12:43 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:25.014 17:12:43 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:25.014 17:12:43 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:25.014 17:12:43 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:25.014 17:12:43 accel.accel_decomp_full_mthread -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:25.014 17:12:43 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:25.014 17:12:43 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:25.014 17:12:43 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:25.014 17:12:43 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:25.014 17:12:43 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:25.014 [2024-07-22 17:12:43.874474] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:25.014 [2024-07-22 17:12:43.874646] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63317 ] 00:08:25.272 [2024-07-22 17:12:44.039391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.531 [2024-07-22 17:12:44.278033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:25.790 17:12:44 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # 
case "$var" in 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.790 
17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:25.790 17:12:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:27.691 00:08:27.691 real 0m2.599s 00:08:27.691 user 0m0.016s 00:08:27.691 sys 0m0.001s 00:08:27.691 ************************************ 00:08:27.691 END TEST accel_decomp_full_mthread 00:08:27.691 ************************************ 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.691 17:12:46 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:27.691 17:12:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:27.691 17:12:46 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:27.691 17:12:46 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:27.691 17:12:46 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:27.692 17:12:46 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:27.692 17:12:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.692 17:12:46 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:27.692 17:12:46 accel -- common/autotest_common.sh@10 -- # set +x 00:08:27.692 
17:12:46 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:27.692 17:12:46 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:27.692 17:12:46 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:27.692 17:12:46 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:27.692 17:12:46 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:27.692 17:12:46 accel -- accel/accel.sh@41 -- # jq -r . 00:08:27.692 ************************************ 00:08:27.692 START TEST accel_dif_functional_tests 00:08:27.692 ************************************ 00:08:27.692 17:12:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:27.692 [2024-07-22 17:12:46.625276] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:27.692 [2024-07-22 17:12:46.625537] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63360 ] 00:08:27.948 [2024-07-22 17:12:46.801109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:28.205 [2024-07-22 17:12:47.074302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.205 [2024-07-22 17:12:47.074397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.205 [2024-07-22 17:12:47.074411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.464 00:08:28.464 00:08:28.464 CUnit - A unit testing framework for C - Version 2.1-3 00:08:28.464 http://cunit.sourceforge.net/ 00:08:28.464 00:08:28.464 00:08:28.464 Suite: accel_dif 00:08:28.464 Test: verify: DIF generated, GUARD check ...passed 00:08:28.464 Test: verify: DIF generated, APPTAG check ...passed 00:08:28.464 Test: verify: DIF generated, REFTAG check ...passed 00:08:28.464 Test: verify: DIF not generated, GUARD check ...passed 
00:08:28.464 Test: verify: DIF not generated, APPTAG check ...[2024-07-22 17:12:47.393550] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:28.464 [2024-07-22 17:12:47.393695] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:28.464 passed 00:08:28.464 Test: verify: DIF not generated, REFTAG check ...passed 00:08:28.464 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:28.464 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-22 17:12:47.393851] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:28.464 [2024-07-22 17:12:47.394057] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:28.464 passed 00:08:28.464 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:28.464 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:28.464 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:28.464 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-22 17:12:47.394528] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:28.464 passed 00:08:28.464 Test: verify copy: DIF generated, GUARD check ...passed 00:08:28.464 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:28.464 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:28.464 Test: verify copy: DIF not generated, GUARD check ...[2024-07-22 17:12:47.395132] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:28.464 passed 00:08:28.464 Test: verify copy: DIF not generated, APPTAG check ...passed 00:08:28.464 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-22 17:12:47.395314] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:28.464 passed 00:08:28.464 Test: generate copy: DIF generated, GUARD 
check ...[2024-07-22 17:12:47.395431] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:28.464 passed 00:08:28.464 Test: generate copy: DIF generated, APPTAG check ...passed 00:08:28.464 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:28.464 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:28.464 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:28.464 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:28.464 Test: generate copy: iovecs-len validate ...[2024-07-22 17:12:47.396343] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:08:28.464 passed 00:08:28.464 Test: generate copy: buffer alignment validate ...passed 00:08:28.464 00:08:28.464 Run Summary: Type Total Ran Passed Failed Inactive 00:08:28.464 suites 1 1 n/a 0 0 00:08:28.464 tests 26 26 26 0 0 00:08:28.464 asserts 115 115 115 0 n/a 00:08:28.464 00:08:28.464 Elapsed time = 0.007 seconds 00:08:29.840 00:08:29.840 real 0m2.175s 00:08:29.840 user 0m3.923s 00:08:29.840 sys 0m0.295s 00:08:29.840 ************************************ 00:08:29.840 END TEST accel_dif_functional_tests 00:08:29.840 ************************************ 00:08:29.840 17:12:48 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.840 17:12:48 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:29.840 17:12:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:29.840 00:08:29.840 real 1m3.520s 00:08:29.840 user 1m8.217s 00:08:29.840 sys 0m6.188s 00:08:29.840 ************************************ 00:08:29.840 17:12:48 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.840 17:12:48 accel -- common/autotest_common.sh@10 -- # set +x 00:08:29.840 END TEST accel 00:08:29.840 ************************************ 00:08:29.840 
17:12:48 -- common/autotest_common.sh@1142 -- # return 0 00:08:29.840 17:12:48 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:29.840 17:12:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:29.840 17:12:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.840 17:12:48 -- common/autotest_common.sh@10 -- # set +x 00:08:29.840 ************************************ 00:08:29.840 START TEST accel_rpc 00:08:29.840 ************************************ 00:08:29.840 17:12:48 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:30.188 * Looking for test storage... 00:08:30.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:30.188 17:12:48 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:30.188 17:12:48 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=63442 00:08:30.188 17:12:48 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:30.188 17:12:48 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 63442 00:08:30.188 17:12:48 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 63442 ']' 00:08:30.188 17:12:48 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.188 17:12:48 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:30.188 17:12:48 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.188 17:12:48 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:30.188 17:12:48 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.188 [2024-07-22 17:12:48.992709] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:30.188 [2024-07-22 17:12:48.993331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63442 ] 00:08:30.445 [2024-07-22 17:12:49.168851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.703 [2024-07-22 17:12:49.418553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.961 17:12:49 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:30.961 17:12:49 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:30.961 17:12:49 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:30.961 17:12:49 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:30.961 17:12:49 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:30.961 17:12:49 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:30.961 17:12:49 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:30.961 17:12:49 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:30.961 17:12:49 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.961 17:12:49 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.218 ************************************ 00:08:31.218 START TEST accel_assign_opcode 00:08:31.218 ************************************ 00:08:31.218 17:12:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:08:31.218 17:12:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:31.218 17:12:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.218 17:12:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:31.218 [2024-07-22 17:12:49.923543] 
accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:31.218 17:12:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.218 17:12:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:31.218 17:12:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.218 17:12:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:31.218 [2024-07-22 17:12:49.931498] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:31.218 17:12:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.218 17:12:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:31.218 17:12:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.218 17:12:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:31.784 17:12:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.784 17:12:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:31.784 17:12:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:31.784 17:12:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.784 17:12:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:31.784 17:12:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:31.784 17:12:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.042 software 00:08:32.042 00:08:32.042 real 0m0.841s 00:08:32.042 user 0m0.053s 00:08:32.042 sys 0m0.012s 00:08:32.042 ************************************ 00:08:32.042 END TEST 
accel_assign_opcode 00:08:32.042 ************************************ 00:08:32.042 17:12:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.042 17:12:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:32.042 17:12:50 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:32.042 17:12:50 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 63442 00:08:32.042 17:12:50 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 63442 ']' 00:08:32.042 17:12:50 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 63442 00:08:32.042 17:12:50 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:32.042 17:12:50 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:32.042 17:12:50 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63442 00:08:32.042 killing process with pid 63442 00:08:32.042 17:12:50 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:32.042 17:12:50 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:32.042 17:12:50 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63442' 00:08:32.042 17:12:50 accel_rpc -- common/autotest_common.sh@967 -- # kill 63442 00:08:32.042 17:12:50 accel_rpc -- common/autotest_common.sh@972 -- # wait 63442 00:08:34.572 00:08:34.572 real 0m4.334s 00:08:34.572 user 0m4.311s 00:08:34.572 sys 0m0.591s 00:08:34.572 17:12:53 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:34.572 ************************************ 00:08:34.572 END TEST accel_rpc 00:08:34.572 ************************************ 00:08:34.572 17:12:53 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.572 17:12:53 -- common/autotest_common.sh@1142 -- # return 0 00:08:34.572 17:12:53 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:34.572 17:12:53 -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:34.572 17:12:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.572 17:12:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.572 ************************************ 00:08:34.572 START TEST app_cmdline 00:08:34.572 ************************************ 00:08:34.572 17:12:53 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:34.572 * Looking for test storage... 00:08:34.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:34.572 17:12:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:34.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.572 17:12:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=63564 00:08:34.572 17:12:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 63564 00:08:34.572 17:12:53 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:34.572 17:12:53 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 63564 ']' 00:08:34.572 17:12:53 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.572 17:12:53 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:34.572 17:12:53 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.572 17:12:53 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:34.572 17:12:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:34.572 [2024-07-22 17:12:53.369883] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:34.572 [2024-07-22 17:12:53.370135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63564 ] 00:08:34.829 [2024-07-22 17:12:53.541417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.086 [2024-07-22 17:12:53.798812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.651 17:12:54 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:35.909 17:12:54 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:35.909 17:12:54 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:35.909 { 00:08:35.909 "version": "SPDK v24.09-pre git sha1 f7b31b2b9", 00:08:35.909 "fields": { 00:08:35.909 "major": 24, 00:08:35.909 "minor": 9, 00:08:35.909 "patch": 0, 00:08:35.909 "suffix": "-pre", 00:08:35.909 "commit": "f7b31b2b9" 00:08:35.909 } 00:08:35.909 } 00:08:36.167 17:12:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:36.167 17:12:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:36.167 17:12:54 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:36.167 17:12:54 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:36.167 17:12:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:36.167 17:12:54 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.167 17:12:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:36.167 17:12:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:36.167 17:12:54 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:36.167 17:12:54 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.167 17:12:54 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:36.167 17:12:54 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:36.167 17:12:54 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:36.167 17:12:54 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:36.167 17:12:54 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:36.167 17:12:54 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:36.167 17:12:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:36.167 17:12:54 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:36.167 17:12:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:36.167 17:12:54 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:36.167 17:12:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:36.167 17:12:54 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:36.167 17:12:54 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:36.167 17:12:54 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:36.426 request: 00:08:36.426 { 00:08:36.426 "method": "env_dpdk_get_mem_stats", 00:08:36.426 "req_id": 1 00:08:36.426 } 00:08:36.426 Got JSON-RPC error response 00:08:36.426 response: 00:08:36.426 { 00:08:36.426 "code": -32601, 00:08:36.426 "message": "Method not found" 00:08:36.426 } 00:08:36.426 17:12:55 app_cmdline -- common/autotest_common.sh@651 -- # es=1 
00:08:36.426 17:12:55 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:36.426 17:12:55 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:36.426 17:12:55 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:36.426 17:12:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 63564 00:08:36.426 17:12:55 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 63564 ']' 00:08:36.426 17:12:55 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 63564 00:08:36.426 17:12:55 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:36.426 17:12:55 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:36.426 17:12:55 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63564 00:08:36.426 killing process with pid 63564 00:08:36.426 17:12:55 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:36.426 17:12:55 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:36.426 17:12:55 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63564' 00:08:36.426 17:12:55 app_cmdline -- common/autotest_common.sh@967 -- # kill 63564 00:08:36.426 17:12:55 app_cmdline -- common/autotest_common.sh@972 -- # wait 63564 00:08:38.993 ************************************ 00:08:38.993 END TEST app_cmdline 00:08:38.993 ************************************ 00:08:38.993 00:08:38.993 real 0m4.363s 00:08:38.993 user 0m4.781s 00:08:38.993 sys 0m0.647s 00:08:38.993 17:12:57 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.993 17:12:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:38.993 17:12:57 -- common/autotest_common.sh@1142 -- # return 0 00:08:38.993 17:12:57 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:38.993 17:12:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:38.993 17:12:57 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.993 17:12:57 -- common/autotest_common.sh@10 -- # set +x 00:08:38.993 ************************************ 00:08:38.993 START TEST version 00:08:38.993 ************************************ 00:08:38.993 17:12:57 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:38.993 * Looking for test storage... 00:08:38.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:38.993 17:12:57 version -- app/version.sh@17 -- # get_header_version major 00:08:38.993 17:12:57 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.993 17:12:57 version -- app/version.sh@14 -- # cut -f2 00:08:38.993 17:12:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:38.993 17:12:57 version -- app/version.sh@17 -- # major=24 00:08:38.993 17:12:57 version -- app/version.sh@18 -- # get_header_version minor 00:08:38.993 17:12:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:38.993 17:12:57 version -- app/version.sh@14 -- # cut -f2 00:08:38.993 17:12:57 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.993 17:12:57 version -- app/version.sh@18 -- # minor=9 00:08:38.993 17:12:57 version -- app/version.sh@19 -- # get_header_version patch 00:08:38.993 17:12:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:38.993 17:12:57 version -- app/version.sh@14 -- # cut -f2 00:08:38.993 17:12:57 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.993 17:12:57 version -- app/version.sh@19 -- # patch=0 00:08:38.993 17:12:57 version -- app/version.sh@20 -- # get_header_version suffix 00:08:38.993 17:12:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' 
/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:38.993 17:12:57 version -- app/version.sh@14 -- # cut -f2 00:08:38.993 17:12:57 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.993 17:12:57 version -- app/version.sh@20 -- # suffix=-pre 00:08:38.993 17:12:57 version -- app/version.sh@22 -- # version=24.9 00:08:38.993 17:12:57 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:38.993 17:12:57 version -- app/version.sh@28 -- # version=24.9rc0 00:08:38.993 17:12:57 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:38.993 17:12:57 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:38.993 17:12:57 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:38.993 17:12:57 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:38.993 00:08:38.993 real 0m0.146s 00:08:38.993 user 0m0.081s 00:08:38.993 sys 0m0.095s 00:08:38.993 17:12:57 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.993 17:12:57 version -- common/autotest_common.sh@10 -- # set +x 00:08:38.993 ************************************ 00:08:38.993 END TEST version 00:08:38.993 ************************************ 00:08:38.993 17:12:57 -- common/autotest_common.sh@1142 -- # return 0 00:08:38.993 17:12:57 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:38.993 17:12:57 -- spdk/autotest.sh@198 -- # uname -s 00:08:38.993 17:12:57 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:38.993 17:12:57 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:38.994 17:12:57 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:38.994 17:12:57 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:38.994 17:12:57 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:38.994 17:12:57 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:38.994 
17:12:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:38.994 17:12:57 -- common/autotest_common.sh@10 -- # set +x 00:08:38.994 17:12:57 -- spdk/autotest.sh@262 -- # '[' 1 -eq 1 ']' 00:08:38.994 17:12:57 -- spdk/autotest.sh@263 -- # run_test iscsi_tgt /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/iscsi_tgt.sh 00:08:38.994 17:12:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:38.994 17:12:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.994 17:12:57 -- common/autotest_common.sh@10 -- # set +x 00:08:38.994 ************************************ 00:08:38.994 START TEST iscsi_tgt 00:08:38.994 ************************************ 00:08:38.994 17:12:57 iscsi_tgt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/iscsi_tgt.sh 00:08:38.994 * Looking for test storage... 00:08:38.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@10 -- # uname -s 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:08:38.994 Cleaning up iSCSI connection 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@18 -- # iscsicleanup 00:08:38.994 17:12:57 iscsi_tgt -- 
common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:08:38.994 17:12:57 iscsi_tgt -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:08:38.994 iscsiadm: No matching sessions found 00:08:38.994 17:12:57 iscsi_tgt -- common/autotest_common.sh@981 -- # true 00:08:38.994 17:12:57 iscsi_tgt -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:08:38.994 iscsiadm: No records found 00:08:38.994 17:12:57 iscsi_tgt -- common/autotest_common.sh@982 -- # true 00:08:38.994 17:12:57 iscsi_tgt -- common/autotest_common.sh@983 -- # rm -rf 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@21 -- # create_veth_interfaces 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@32 -- # ip link set init_br nomaster 00:08:38.994 Cannot find device "init_br" 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@32 -- # true 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@33 -- # ip link set tgt_br nomaster 00:08:38.994 Cannot find device "tgt_br" 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@33 -- # true 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@34 -- # ip link set tgt_br2 nomaster 00:08:38.994 Cannot find device "tgt_br2" 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@34 -- # true 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@35 -- # ip link set init_br down 00:08:38.994 Cannot find device "init_br" 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@35 -- # true 00:08:38.994 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@36 -- # ip link set tgt_br down 00:08:39.256 Cannot find device "tgt_br" 00:08:39.256 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@36 -- # true 00:08:39.256 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@37 -- # ip link set tgt_br2 down 00:08:39.256 Cannot find device "tgt_br2" 00:08:39.256 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@37 -- # true 00:08:39.256 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@38 -- # ip link delete iscsi_br type bridge 00:08:39.256 Cannot find device 
"iscsi_br" 00:08:39.256 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@38 -- # true 00:08:39.256 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@39 -- # ip link delete spdk_init_int 00:08:39.256 Cannot find device "spdk_init_int" 00:08:39.256 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@39 -- # true 00:08:39.256 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@40 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int 00:08:39.256 Cannot open network namespace "spdk_iscsi_ns": No such file or directory 00:08:39.256 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@40 -- # true 00:08:39.256 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@41 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int2 00:08:39.256 Cannot open network namespace "spdk_iscsi_ns": No such file or directory 00:08:39.256 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@41 -- # true 00:08:39.256 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@42 -- # ip netns del spdk_iscsi_ns 00:08:39.256 Cannot remove namespace file "/var/run/netns/spdk_iscsi_ns": No such file or directory 00:08:39.256 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@42 -- # true 00:08:39.256 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@44 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:08:39.256 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@47 -- # ip netns add spdk_iscsi_ns 00:08:39.256 17:12:57 iscsi_tgt -- iscsi_tgt/common.sh@50 -- # ip link add spdk_init_int type veth peer name init_br 00:08:39.256 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@51 -- # ip link add spdk_tgt_int type veth peer name tgt_br 00:08:39.256 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@52 -- # ip link add spdk_tgt_int2 type veth peer name tgt_br2 00:08:39.256 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@55 -- # ip link set spdk_tgt_int netns spdk_iscsi_ns 00:08:39.256 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@56 -- # ip link set spdk_tgt_int2 netns spdk_iscsi_ns 00:08:39.256 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@59 -- # ip addr add 10.0.0.2/24 dev spdk_init_int 00:08:39.256 17:12:58 
iscsi_tgt -- iscsi_tgt/common.sh@60 -- # ip netns exec spdk_iscsi_ns ip addr add 10.0.0.1/24 dev spdk_tgt_int 00:08:39.256 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@61 -- # ip netns exec spdk_iscsi_ns ip addr add 10.0.0.3/24 dev spdk_tgt_int2 00:08:39.256 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@64 -- # ip link set spdk_init_int up 00:08:39.256 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@65 -- # ip link set init_br up 00:08:39.256 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@66 -- # ip link set tgt_br up 00:08:39.256 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@67 -- # ip link set tgt_br2 up 00:08:39.256 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@68 -- # ip netns exec spdk_iscsi_ns ip link set spdk_tgt_int up 00:08:39.256 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@69 -- # ip netns exec spdk_iscsi_ns ip link set spdk_tgt_int2 up 00:08:39.256 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@70 -- # ip netns exec spdk_iscsi_ns ip link set lo up 00:08:39.256 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@73 -- # ip link add iscsi_br type bridge 00:08:39.256 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@74 -- # ip link set iscsi_br up 00:08:39.256 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@77 -- # ip link set init_br master iscsi_br 00:08:39.529 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@78 -- # ip link set tgt_br master iscsi_br 00:08:39.529 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@79 -- # ip link set tgt_br2 master iscsi_br 00:08:39.529 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@82 -- # iptables -I INPUT 1 -i spdk_init_int -p tcp --dport 3260 -j ACCEPT 00:08:39.530 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@83 -- # iptables -A FORWARD -i iscsi_br -o iscsi_br -j ACCEPT 00:08:39.530 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@86 -- # ping -c 1 10.0.0.1 00:08:39.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:39.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:08:39.530 00:08:39.530 --- 10.0.0.1 ping statistics --- 00:08:39.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.530 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:08:39.530 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@87 -- # ping -c 1 10.0.0.3 00:08:39.530 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:39.530 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:08:39.530 00:08:39.530 --- 10.0.0.3 ping statistics --- 00:08:39.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.530 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:39.530 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@88 -- # ip netns exec spdk_iscsi_ns ping -c 1 10.0.0.2 00:08:39.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.022 ms 00:08:39.530 00:08:39.530 --- 10.0.0.2 ping statistics --- 00:08:39.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.530 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:39.530 17:12:58 iscsi_tgt -- iscsi_tgt/common.sh@89 -- # ip netns exec spdk_iscsi_ns ping -c 1 10.0.0.2 00:08:39.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:39.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.021 ms 00:08:39.530 00:08:39.530 --- 10.0.0.2 ping statistics --- 00:08:39.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.530 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:08:39.530 17:12:58 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@23 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:08:39.530 17:12:58 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@25 -- # run_test iscsi_tgt_sock /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock/sock.sh 00:08:39.530 17:12:58 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:39.530 17:12:58 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.530 17:12:58 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:08:39.530 ************************************ 00:08:39.530 START TEST iscsi_tgt_sock 00:08:39.530 ************************************ 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock/sock.sh 00:08:39.530 * Looking for test storage... 
00:08:39.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:08:39.530 17:12:58 
iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@48 -- # iscsitestinit 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@50 -- # HELLO_SOCK_APP='ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/examples/hello_sock' 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@51 -- # SOCAT_APP=socat 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@52 -- # OPENSSL_APP=openssl 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@53 -- # PSK='-N ssl -E 1234567890ABCDEF -I psk.spdk.io' 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@58 -- # timing_enter sock_client 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@59 -- # echo 'Testing client path' 00:08:39.530 Testing client path 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@63 -- # server_pid=63904 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@64 -- # trap 'killprocess $server_pid;iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@66 -- # waitfortcp 63904 10.0.0.2:3260 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@25 -- # local addr=10.0.0.2:3260 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@27 -- # echo 'Waiting for process to start up and listen on address 10.0.0.2:3260...' 00:08:39.530 Waiting for process to start up and listen on address 10.0.0.2:3260... 
00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@29 -- # xtrace_disable 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:08:39.530 17:12:58 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@62 -- # socat tcp-l:3260,fork,bind=10.0.0.2 exec:/bin/cat 00:08:40.097 [2024-07-22 17:12:58.922583] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:40.097 [2024-07-22 17:12:58.922964] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63908 ] 00:08:40.355 [2024-07-22 17:12:59.111085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.614 [2024-07-22 17:12:59.415726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.614 [2024-07-22 17:12:59.415840] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:40.614 [2024-07-22 17:12:59.415902] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:08:40.614 [2024-07-22 17:12:59.416214] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 33102) 00:08:40.614 [2024-07-22 17:12:59.416407] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:08:41.549 [2024-07-22 17:13:00.416477] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:08:41.549 [2024-07-22 17:13:00.416736] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:42.130 [2024-07-22 17:13:00.888405] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:42.130 [2024-07-22 17:13:00.888653] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63939 ] 00:08:42.130 [2024-07-22 17:13:01.063032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.389 [2024-07-22 17:13:01.317958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.389 [2024-07-22 17:13:01.318070] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:42.389 [2024-07-22 17:13:01.318118] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:08:42.389 [2024-07-22 17:13:01.318356] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 35434) 00:08:42.389 [2024-07-22 17:13:01.318498] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:08:43.765 [2024-07-22 17:13:02.318532] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:08:43.765 [2024-07-22 17:13:02.318789] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:44.023 [2024-07-22 17:13:02.782589] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:44.023 [2024-07-22 17:13:02.782796] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63970 ] 00:08:44.023 [2024-07-22 17:13:02.949158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.282 [2024-07-22 17:13:03.196134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.282 [2024-07-22 17:13:03.196222] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:44.282 [2024-07-22 17:13:03.196269] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:08:44.282 [2024-07-22 17:13:03.196627] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 35446) 00:08:44.282 [2024-07-22 17:13:03.196740] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:08:45.658 [2024-07-22 17:13:04.196797] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:08:45.658 [2024-07-22 17:13:04.197050] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:45.916 killing process with pid 63904 00:08:45.916 Testing SSL server path 00:08:45.916 Waiting for process to start up and listen on address 10.0.0.1:3260... 00:08:45.916 [2024-07-22 17:13:04.722539] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:45.916 [2024-07-22 17:13:04.722708] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64020 ] 00:08:46.174 [2024-07-22 17:13:04.885724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.431 [2024-07-22 17:13:05.130816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.431 [2024-07-22 17:13:05.130931] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:46.431 [2024-07-22 17:13:05.131064] hello_sock.c: 472:hello_sock_listen: *NOTICE*: Listening connection on 10.0.0.1:3260 with sock_impl(ssl) 00:08:46.431 [2024-07-22 17:13:05.253882] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:46.431 [2024-07-22 17:13:05.254149] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64025 ] 00:08:46.689 [2024-07-22 17:13:05.451386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.947 [2024-07-22 17:13:05.712037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.947 [2024-07-22 17:13:05.712140] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:46.947 [2024-07-22 17:13:05.712200] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:08:46.947 [2024-07-22 17:13:05.718375] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 58644) 00:08:46.947 [2024-07-22 17:13:05.718442] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 
58644) to (10.0.0.1, 3260) 00:08:46.947 [2024-07-22 17:13:05.722256] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:08:47.881 [2024-07-22 17:13:06.722314] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:08:47.881 [2024-07-22 17:13:06.722522] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:08:47.881 [2024-07-22 17:13:06.722703] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:48.455 [2024-07-22 17:13:07.188905] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:48.455 [2024-07-22 17:13:07.189175] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64054 ] 00:08:48.455 [2024-07-22 17:13:07.369805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.712 [2024-07-22 17:13:07.618854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.712 [2024-07-22 17:13:07.619112] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:48.712 [2024-07-22 17:13:07.619278] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:08:48.712 [2024-07-22 17:13:07.621607] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 58660) to (10.0.0.1, 3260) 00:08:48.712 [2024-07-22 17:13:07.625246] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 58660) 00:08:48.712 [2024-07-22 17:13:07.628429] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 
00:08:50.085 [2024-07-22 17:13:08.628605] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:08:50.086 [2024-07-22 17:13:08.629083] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:08:50.086 [2024-07-22 17:13:08.629274] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:50.344 [2024-07-22 17:13:09.093836] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:50.344 [2024-07-22 17:13:09.094320] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64082 ] 00:08:50.344 [2024-07-22 17:13:09.270087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.602 [2024-07-22 17:13:09.512376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.602 [2024-07-22 17:13:09.512685] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:50.602 [2024-07-22 17:13:09.512886] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:08:50.602 [2024-07-22 17:13:09.514580] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 58666) to (10.0.0.1, 3260) 00:08:50.602 [2024-07-22 17:13:09.518783] posix.c: 755:posix_sock_create_ssl_context: *ERROR*: Incorrect TLS version provided: 7 00:08:50.602 [2024-07-22 17:13:09.518889] posix.c:1033:posix_sock_create: *ERROR*: posix_sock_create_ssl_context() failed, errno = 2 00:08:50.602 [2024-07-22 17:13:09.518954] hello_sock.c: 309:hello_sock_connect: *ERROR*: connect error(2): No such file or directory 00:08:50.602 [2024-07-22 17:13:09.518974] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.602 [2024-07-22 17:13:09.519050] hello_sock.c: 591:main: *ERROR*: ERROR starting 
application 00:08:50.602 [2024-07-22 17:13:09.519068] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:50.602 [2024-07-22 17:13:09.519123] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:08:51.168 [2024-07-22 17:13:09.981778] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:51.168 [2024-07-22 17:13:09.982037] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64098 ] 00:08:51.427 [2024-07-22 17:13:10.170778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.686 [2024-07-22 17:13:10.428860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.686 [2024-07-22 17:13:10.429232] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:51.686 [2024-07-22 17:13:10.429401] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:08:51.686 [2024-07-22 17:13:10.431346] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 52984) to (10.0.0.1, 3260) 00:08:51.686 [2024-07-22 17:13:10.435179] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 52984) 00:08:51.686 [2024-07-22 17:13:10.438735] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 
00:08:52.621 [2024-07-22 17:13:11.438925] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:08:52.621 [2024-07-22 17:13:11.439390] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:08:52.621 [2024-07-22 17:13:11.439542] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:53.196 SSL_connect:before SSL initialization 00:08:53.196 SSL_connect:SSLv3/TLS write client hello 00:08:53.196 [2024-07-22 17:13:11.941504] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.2, 50124) to (10.0.0.1, 3260) 00:08:53.196 SSL_connect:SSLv3/TLS write client hello 00:08:53.196 SSL_connect:SSLv3/TLS read server hello 00:08:53.196 Can't use SSL_get_servername 00:08:53.196 SSL_connect:TLSv1.3 read encrypted extensions 00:08:53.196 SSL_connect:SSLv3/TLS read finished 00:08:53.196 SSL_connect:SSLv3/TLS write change cipher spec 00:08:53.196 SSL_connect:SSLv3/TLS write finished 00:08:53.196 SSL_connect:SSL negotiation finished successfully 00:08:53.196 SSL_connect:SSL negotiation finished successfully 00:08:53.196 SSL_connect:SSLv3/TLS read server session ticket 00:08:55.095 DONE 00:08:55.095 SSL3 alert write:warning:close notify 00:08:55.095 [2024-07-22 17:13:13.872929] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:08:55.095 [2024-07-22 17:13:13.936891] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:55.095 [2024-07-22 17:13:13.937086] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64149 ] 00:08:55.353 [2024-07-22 17:13:14.112192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.611 [2024-07-22 17:13:14.397791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.611 [2024-07-22 17:13:14.399587] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:55.611 [2024-07-22 17:13:14.399654] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:08:55.611 [2024-07-22 17:13:14.401225] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 52986) to (10.0.0.1, 3260) 00:08:55.611 [2024-07-22 17:13:14.406548] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 52986) 00:08:55.611 [2024-07-22 17:13:14.408282] hello_sock.c: 240:hello_sock_writev_poll: *ERROR*: Write to socket failed. Closing connection... 
00:08:55.611 [2024-07-22 17:13:14.408288] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:08:55.611 [2024-07-22 17:13:14.408381] hello_sock.c: 208:hello_sock_recv_poll: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:08:56.547 [2024-07-22 17:13:15.408358] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:08:56.547 [2024-07-22 17:13:15.408824] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:56.547 [2024-07-22 17:13:15.408930] hello_sock.c: 591:main: *ERROR*: ERROR starting application 00:08:56.547 [2024-07-22 17:13:15.408983] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:57.113 [2024-07-22 17:13:15.916478] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:57.113 [2024-07-22 17:13:15.916702] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64175 ] 00:08:57.372 [2024-07-22 17:13:16.091149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.631 [2024-07-22 17:13:16.370391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.631 [2024-07-22 17:13:16.370738] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:08:57.631 [2024-07-22 17:13:16.370924] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:08:57.631 [2024-07-22 17:13:16.372694] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 52992) to (10.0.0.1, 3260) 00:08:57.631 [2024-07-22 17:13:16.376983] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 52992) 00:08:57.631 [2024-07-22 17:13:16.378304] posix.c: 
586:posix_sock_psk_find_session_server_cb: *ERROR*: Unknown Client's PSK ID 00:08:57.631 [2024-07-22 17:13:16.378548] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:08:57.631 [2024-07-22 17:13:16.378549] hello_sock.c: 240:hello_sock_writev_poll: *ERROR*: Write to socket failed. Closing connection... 00:08:57.631 [2024-07-22 17:13:16.378616] hello_sock.c: 208:hello_sock_recv_poll: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:08:58.565 [2024-07-22 17:13:17.378598] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:08:58.565 [2024-07-22 17:13:17.379141] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:58.565 [2024-07-22 17:13:17.379366] hello_sock.c: 591:main: *ERROR*: ERROR starting application 00:08:58.565 [2024-07-22 17:13:17.379516] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:08:59.130 killing process with pid 64020 00:09:00.064 [2024-07-22 17:13:18.803431] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:09:00.064 [2024-07-22 17:13:18.803915] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:09:00.322 Waiting for process to start up and listen on address 10.0.0.1:3260... 00:09:00.579 [2024-07-22 17:13:19.316736] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:00.579 [2024-07-22 17:13:19.316927] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64233 ] 00:09:00.579 [2024-07-22 17:13:19.493439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.837 [2024-07-22 17:13:19.734564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.837 [2024-07-22 17:13:19.734662] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:09:00.837 [2024-07-22 17:13:19.734785] hello_sock.c: 472:hello_sock_listen: *NOTICE*: Listening connection on 10.0.0.1:3260 with sock_impl(posix) 00:09:00.837 [2024-07-22 17:13:19.782970] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.2, 50134) to (10.0.0.1, 3260) 00:09:00.837 [2024-07-22 17:13:19.783142] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:09:01.095 killing process with pid 64233 00:09:02.029 [2024-07-22 17:13:20.806960] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:09:02.029 [2024-07-22 17:13:20.807188] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:09:02.597 ************************************ 00:09:02.597 END TEST iscsi_tgt_sock 00:09:02.597 ************************************ 00:09:02.597 00:09:02.597 real 0m22.980s 00:09:02.597 user 0m29.017s 00:09:02.597 sys 0m2.866s 00:09:02.597 17:13:21 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.597 17:13:21 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:09:02.597 17:13:21 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:09:02.597 17:13:21 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@26 -- # [[ -d /usr/local/calsoft ]] 00:09:02.597 17:13:21 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@27 -- # run_test 
iscsi_tgt_calsoft /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.sh 00:09:02.598 17:13:21 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:02.598 17:13:21 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.598 17:13:21 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:09:02.598 ************************************ 00:09:02.598 START TEST iscsi_tgt_calsoft 00:09:02.598 ************************************ 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.sh 00:09:02.598 * Looking for test storage... 00:09:02.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 
00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@15 -- # MALLOC_BDEV_SIZE=64 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@16 -- # MALLOC_BLOCK_SIZE=512 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@18 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@19 -- # calsoft_py=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.py 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@22 -- # mkdir -p /usr/local/etc 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@23 -- # cp /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/its.conf /usr/local/etc/ 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@26 -- # echo IP=10.0.0.1 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@28 -- # timing_enter start_iscsi_tgt 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@722 -- # xtrace_disable 
00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:09:02.598 Process pid: 64326 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@30 -- # iscsitestinit 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@33 -- # pid=64326 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@32 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x1 --wait-for-rpc 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@34 -- # echo 'Process pid: 64326' 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@36 -- # trap 'killprocess $pid; delete_tmp_conf_files; iscsitestfini; exit 1 ' SIGINT SIGTERM EXIT 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@38 -- # waitforlisten 64326 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@829 -- # '[' -z 64326 ']' 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:02.598 17:13:21 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:09:02.857 [2024-07-22 17:13:21.559429] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:02.857 [2024-07-22 17:13:21.559747] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64326 ] 00:09:02.857 [2024-07-22 17:13:21.726094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.115 [2024-07-22 17:13:21.992856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.680 17:13:22 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:03.680 17:13:22 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@862 -- # return 0 00:09:03.680 17:13:22 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:09:03.680 17:13:22 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:09:04.690 17:13:23 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@41 -- # echo 'iscsi_tgt is listening. Running tests...' 00:09:04.690 iscsi_tgt is listening. Running tests... 
00:09:04.690 17:13:23 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@43 -- # timing_exit start_iscsi_tgt 00:09:04.690 17:13:23 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.690 17:13:23 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:09:04.690 17:13:23 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_auth_group 1 -c 'user:root secret:tester' 00:09:05.257 17:13:23 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_discovery_auth -g 1 00:09:05.257 17:13:24 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:09:05.823 17:13:24 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:09:05.823 17:13:24 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b MyBdev 64 512 00:09:06.082 MyBdev 00:09:06.082 17:13:25 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias MyBdev:0 1:2 64 -g 1 00:09:06.340 17:13:25 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@55 -- # sleep 1 00:09:07.733 17:13:26 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@57 -- # '[' '' ']' 00:09:07.733 17:13:26 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.py /home/vagrant/spdk_repo/spdk/../output 00:09:07.733 [2024-07-22 17:13:26.336413] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:09:07.733 [2024-07-22 17:13:26.360937] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(3) error ExpCmdSN=4 00:09:07.733 [2024-07-22 17:13:26.361169] 
iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:07.733 [2024-07-22 17:13:26.402954] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:09:07.733 [2024-07-22 17:13:26.422283] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:09:07.734 [2024-07-22 17:13:26.444641] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:07.734 [2024-07-22 17:13:26.444823] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:07.734 [2024-07-22 17:13:26.483255] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:07.734 [2024-07-22 17:13:26.483478] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:07.734 [2024-07-22 17:13:26.505517] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:07.992 [2024-07-22 17:13:26.867368] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:07.992 [2024-07-22 17:13:26.867529] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:07.992 [2024-07-22 17:13:26.888665] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:07.992 [2024-07-22 17:13:26.888834] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:07.992 [2024-07-22 17:13:26.908012] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:07.992 [2024-07-22 17:13:26.908212] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:07.992 [2024-07-22 17:13:26.930272] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:08.251 [2024-07-22 17:13:26.967482] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:09:08.251 [2024-07-22 17:13:27.063255] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:08.251 [2024-07-22 17:13:27.098277] iscsi.c:4448:iscsi_update_cmdsn: 
*ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:08.251 [2024-07-22 17:13:27.098447] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:08.251 [2024-07-22 17:13:27.136137] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:08.251 [2024-07-22 17:13:27.136458] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:08.251 [2024-07-22 17:13:27.196281] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:08.510 [2024-07-22 17:13:27.236032] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:08.510 [2024-07-22 17:13:27.236270] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:08.510 [2024-07-22 17:13:27.256308] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:08.510 [2024-07-22 17:13:27.256589] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:08.510 [2024-07-22 17:13:27.277968] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=2 00:09:08.510 [2024-07-22 17:13:27.298195] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:08.510 [2024-07-22 17:13:27.321002] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:08.510 [2024-07-22 17:13:27.340701] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(2) ignore (ExpCmdSN=3, MaxCmdSN=66) 00:09:08.510 [2024-07-22 17:13:27.340846] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:08.510 [2024-07-22 17:13:27.341001] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:08.510 [2024-07-22 17:13:27.414151] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=8, MaxCmdSN=71) 00:09:08.510 [2024-07-22 17:13:27.414296] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:09:08.768 [2024-07-22 17:13:27.475194] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) 
ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:08.768 [2024-07-22 17:13:27.475386] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(4) ignore (ExpCmdSN=5, MaxCmdSN=67) 00:09:08.768 [2024-07-22 17:13:27.475798] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:09:08.768 [2024-07-22 17:13:27.494144] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:08.768 [2024-07-22 17:13:27.553231] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:08.768 [2024-07-22 17:13:27.653417] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:08.768 [2024-07-22 17:13:27.673817] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:09:08.768 [2024-07-22 17:13:27.691745] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:08.768 [2024-07-22 17:13:27.691931] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:09.026 [2024-07-22 17:13:27.796308] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:09.026 [2024-07-22 17:13:27.796465] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:09.026 [2024-07-22 17:13:27.851099] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=2 00:09:09.026 [2024-07-22 17:13:27.890829] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(341) ignore (ExpCmdSN=8, MaxCmdSN=71) 00:09:09.026 [2024-07-22 17:13:27.891014] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(8) ignore (ExpCmdSN=9, MaxCmdSN=71) 00:09:09.026 [2024-07-22 17:13:27.891854] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:09:09.026 [2024-07-22 17:13:27.912662] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:09.026 [2024-07-22 17:13:27.912839] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:09.026 [2024-07-22 17:13:27.932113] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error 
ExpCmdSN=1 00:09:09.285 [2024-07-22 17:13:27.987414] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:09.285 [2024-07-22 17:13:28.047614] iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:09:09.285 [2024-07-22 17:13:28.047690] iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on iqn.2016-06.io.spdk:Target3,t,0x0001(iqn.1994-05.com.redhat:b3283535dc3b,i,0x00230d030000) 00:09:09.285 [2024-07-22 17:13:28.047720] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection 00:09:09.285 [2024-07-22 17:13:28.070096] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:09.285 [2024-07-22 17:13:28.108142] param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 276 00:09:09.285 [2024-07-22 17:13:28.108217] iscsi.c:1303:iscsi_op_login_store_incoming_params: *ERROR*: iscsi_parse_params() failed 00:09:09.285 [2024-07-22 17:13:28.150625] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:09.285 [2024-07-22 17:13:28.150695] iscsi.c:3961:iscsi_handle_recovery_datain: *ERROR*: Initiator requests BegRun: 0x00000000, RunLength:0x00001000 greater than maximum DataSN: 0x00000004. 00:09:09.285 [2024-07-22 17:13:28.150727] iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=10) failed on iqn.2016-06.io.spdk:Target3,t,0x0001(iqn.1994-05.com.redhat:b3283535dc3b,i,0x00230d030000) 00:09:09.285 [2024-07-22 17:13:28.150742] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. 
Close the connection 00:09:09.285 [2024-07-22 17:13:28.219052] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:09.543 [2024-07-22 17:13:28.236524] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(1) ignore (ExpCmdSN=3, MaxCmdSN=66) 00:09:09.543 [2024-07-22 17:13:28.236711] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(1) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:09.543 [2024-07-22 17:13:28.237019] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=5, MaxCmdSN=67) 00:09:09.544 [2024-07-22 17:13:28.237183] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=6, MaxCmdSN=67) 00:09:09.544 [2024-07-22 17:13:28.238161] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:09:09.544 [2024-07-22 17:13:28.270386] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:09.544 [2024-07-22 17:13:28.371035] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=ffffffff 00:09:09.544 [2024-07-22 17:13:28.393386] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:09.544 [2024-07-22 17:13:28.467878] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:09.544 [2024-07-22 17:13:28.468090] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:09.802 [2024-07-22 17:13:28.505437] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:09.802 [2024-07-22 17:13:28.505797] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:09.802 [2024-07-22 17:13:28.526606] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:09.802 [2024-07-22 17:13:28.547906] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:09:09.802 [2024-07-22 17:13:28.570269] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:09.802 [2024-07-22 17:13:28.628628] iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag 
is 2745410467, and the dataout task tag is 2728567458 00:09:09.802 [2024-07-22 17:13:28.628867] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:09:09.802 [2024-07-22 17:13:28.629168] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:09:09.802 [2024-07-22 17:13:28.629313] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:09:09.802 [2024-07-22 17:13:28.649406] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:09.802 [2024-07-22 17:13:28.649577] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:09.802 [2024-07-22 17:13:28.670439] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:09.802 [2024-07-22 17:13:28.670622] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:09.802 [2024-07-22 17:13:28.691116] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:09.802 [2024-07-22 17:13:28.691285] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:09.802 [2024-07-22 17:13:28.743890] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:10.061 [2024-07-22 17:13:28.763436] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:10.061 [2024-07-22 17:13:28.763612] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:10.061 [2024-07-22 17:13:28.837974] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:10.061 [2024-07-22 17:13:28.860909] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:10.061 [2024-07-22 17:13:28.877141] iscsi.c:4522:iscsi_pdu_hdr_handle: *ERROR*: before Full Feature 00:09:10.061 PDU 00:09:10.061 00000000 00 81 00 00 00 00 00 81 00 02 3d 03 00 00 00 00 ..........=..... 00:09:10.061 00000010 00 00 00 05 00 00 00 00 00 00 00 00 00 00 00 00 ................ 
00:09:10.061 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:09:10.061 [2024-07-22 17:13:28.877256] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection 00:09:10.061 [2024-07-22 17:13:28.900641] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:10.061 [2024-07-22 17:13:28.916822] iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 4/max 1, expecting 0 00:09:10.061 [2024-07-22 17:13:28.994422] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:10.061 [2024-07-22 17:13:28.994604] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:10.321 [2024-07-22 17:13:29.014350] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:10.321 [2024-07-22 17:13:29.014549] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:10.321 [2024-07-22 17:13:29.035711] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:10.321 [2024-07-22 17:13:29.036087] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:10.321 [2024-07-22 17:13:29.074457] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:10.321 [2024-07-22 17:13:29.074633] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:10.321 [2024-07-22 17:13:29.096574] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:10.321 [2024-07-22 17:13:29.096740] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:10.321 [2024-07-22 17:13:29.117061] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:10.321 [2024-07-22 17:13:29.138339] iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:09:10.321 [2024-07-22 17:13:29.138528] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 
00:09:10.321 [2024-07-22 17:13:29.160587] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:10.321 [2024-07-22 17:13:29.182404] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:10.321 [2024-07-22 17:13:29.182604] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:10.321 [2024-07-22 17:13:29.196081] iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 4/max 1, expecting 0 00:09:10.321 [2024-07-22 17:13:29.214556] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:09:10.321 [2024-07-22 17:13:29.251656] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:10.321 [2024-07-22 17:13:29.251815] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:10.580 [2024-07-22 17:13:29.271951] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:10.580 [2024-07-22 17:13:29.272112] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:10.580 [2024-07-22 17:13:29.325712] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:10.580 [2024-07-22 17:13:29.325890] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:10.580 [2024-07-22 17:13:29.346880] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:10.580 [2024-07-22 17:13:29.347059] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:10.580 [2024-07-22 17:13:29.368781] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:10.580 [2024-07-22 17:13:29.368974] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:10.580 [2024-07-22 17:13:29.390117] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:10.580 [2024-07-22 17:13:29.390288] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) 
error ExpCmdSN=5 00:09:10.580 [2024-07-22 17:13:29.412196] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:10.580 [2024-07-22 17:13:29.449811] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:10.580 [2024-07-22 17:13:29.449995] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:10.580 [2024-07-22 17:13:29.470298] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:09:10.580 [2024-07-22 17:13:29.491805] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:10.839 [2024-07-22 17:13:29.533686] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:09:10.839 [2024-07-22 17:13:29.572297] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key ImmediateDataa 00:09:10.839 [2024-07-22 17:13:29.587999] iscsi.c:4522:iscsi_pdu_hdr_handle: *ERROR*: before Full Feature 00:09:10.839 PDU 00:09:10.839 00000000 01 81 00 00 00 00 00 81 00 02 3d 03 00 00 00 00 ..........=..... 00:09:10.839 00000010 00 00 00 05 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:09:10.839 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:09:10.839 [2024-07-22 17:13:29.588105] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. 
Close the connection 00:09:10.839 [2024-07-22 17:13:29.649121] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:12.748 [2024-07-22 17:13:31.609500] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:12.748 [2024-07-22 17:13:31.668400] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:12.748 [2024-07-22 17:13:31.689974] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:13.006 [2024-07-22 17:13:31.734995] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:09:13.006 [2024-07-22 17:13:31.829915] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:13.006 [2024-07-22 17:13:31.885640] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:13.006 [2024-07-22 17:13:31.885831] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:13.006 [2024-07-22 17:13:31.908575] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:13.006 [2024-07-22 17:13:31.908741] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:13.265 [2024-07-22 17:13:31.974347] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:13.265 [2024-07-22 17:13:32.013469] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:14.201 [2024-07-22 17:13:33.054406] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:15.154 [2024-07-22 17:13:34.035141] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=6, MaxCmdSN=68) 00:09:15.154 [2024-07-22 17:13:34.035646] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=7 00:09:15.154 [2024-07-22 17:13:34.054646] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=5, MaxCmdSN=68) 00:09:16.530 [2024-07-22 17:13:35.054924] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(4) ignore (ExpCmdSN=6, 
MaxCmdSN=69) 00:09:16.530 [2024-07-22 17:13:35.055199] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=7, MaxCmdSN=70) 00:09:16.530 [2024-07-22 17:13:35.055227] iscsi.c:4028:iscsi_handle_status_snack: *ERROR*: Unable to find StatSN: 0x00000007. For a StatusSNACK, assuming this is a proactive SNACK for an untransmitted StatSN, ignoring. 00:09:16.530 [2024-07-22 17:13:35.055254] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=8 00:09:28.749 [2024-07-22 17:13:47.102646] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:09:28.749 [2024-07-22 17:13:47.124273] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:09:28.749 [2024-07-22 17:13:47.143340] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:09:28.749 [2024-07-22 17:13:47.144684] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:09:28.749 [2024-07-22 17:13:47.165412] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:09:28.749 [2024-07-22 17:13:47.185294] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:09:28.749 [2024-07-22 17:13:47.205694] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:09:28.749 [2024-07-22 17:13:47.246384] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:09:28.749 [2024-07-22 17:13:47.248705] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=64 00:09:28.749 [2024-07-22 17:13:47.274183] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1107296256) error ExpCmdSN=66 00:09:28.749 [2024-07-22 17:13:47.289396] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:09:28.749 [2024-07-22 17:13:47.315399] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=67 00:09:28.749 Skipping tc_ffp_15_2. It is known to fail. 00:09:28.749 Skipping tc_ffp_29_2. It is known to fail. 00:09:28.749 Skipping tc_ffp_29_3. 
It is known to fail. 00:09:28.749 Skipping tc_ffp_29_4. It is known to fail. 00:09:28.749 Skipping tc_err_1_1. It is known to fail. 00:09:28.749 Skipping tc_err_1_2. It is known to fail. 00:09:28.749 Skipping tc_err_2_8. It is known to fail. 00:09:28.749 Skipping tc_err_3_1. It is known to fail. 00:09:28.749 Skipping tc_err_3_2. It is known to fail. 00:09:28.749 Skipping tc_err_3_3. It is known to fail. 00:09:28.749 Skipping tc_err_3_4. It is known to fail. 00:09:28.749 Skipping tc_err_5_1. It is known to fail. 00:09:28.749 Skipping tc_login_3_1. It is known to fail. 00:09:28.749 Skipping tc_login_11_2. It is known to fail. 00:09:28.749 Skipping tc_login_11_4. It is known to fail. 00:09:28.749 Skipping tc_login_2_2. It is known to fail. 00:09:28.749 Skipping tc_login_29_1. It is known to fail. 00:09:28.749 17:13:47 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@62 -- # failed=0 00:09:28.749 17:13:47 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:28.749 17:13:47 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@67 -- # iscsicleanup 00:09:28.749 Cleaning up iSCSI connection 00:09:28.749 17:13:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:09:28.749 17:13:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:09:28.749 iscsiadm: No matching sessions found 00:09:28.749 17:13:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@981 -- # true 00:09:28.749 17:13:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:09:28.749 iscsiadm: No records found 00:09:28.749 17:13:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@982 -- # true 00:09:28.749 17:13:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@983 -- # rm -rf 00:09:28.749 17:13:47 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@68 -- # killprocess 64326 00:09:28.749 17:13:47 
iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@948 -- # '[' -z 64326 ']' 00:09:28.749 17:13:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@952 -- # kill -0 64326 00:09:28.749 17:13:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@953 -- # uname 00:09:28.749 17:13:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:28.749 17:13:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64326 00:09:28.749 17:13:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:28.750 killing process with pid 64326 00:09:28.750 17:13:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:28.750 17:13:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64326' 00:09:28.750 17:13:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@967 -- # kill 64326 00:09:28.750 17:13:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@972 -- # wait 64326 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@69 -- # delete_tmp_conf_files 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@12 -- # rm -f /usr/local/etc/its.conf 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@70 -- # iscsitestfini 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@71 -- # exit 0 00:09:31.283 00:09:31.283 real 0m28.549s 00:09:31.283 user 0m44.376s 00:09:31.283 sys 0m2.674s 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:09:31.283 ************************************ 00:09:31.283 END TEST iscsi_tgt_calsoft 00:09:31.283 
************************************ 00:09:31.283 17:13:49 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:09:31.283 17:13:49 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@31 -- # run_test iscsi_tgt_filesystem /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem/filesystem.sh 00:09:31.283 17:13:49 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:31.283 17:13:49 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.283 17:13:49 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:09:31.283 ************************************ 00:09:31.283 START TEST iscsi_tgt_filesystem 00:09:31.283 ************************************ 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem/filesystem.sh 00:09:31.283 * Looking for test storage... 00:09:31.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/setup/common.sh 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- 
common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@35 -- # 
CONFIG_FUZZER=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:31.283 17:13:49 
iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:31.283 
17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:31.283 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:31.284 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:31.284 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:31.284 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:31.284 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:31.284 17:13:49 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:09:31.284 
17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:31.284 #define SPDK_CONFIG_H 00:09:31.284 #define SPDK_CONFIG_APPS 1 00:09:31.284 #define SPDK_CONFIG_ARCH native 00:09:31.284 #define SPDK_CONFIG_ASAN 1 00:09:31.284 #undef SPDK_CONFIG_AVAHI 00:09:31.284 #undef SPDK_CONFIG_CET 00:09:31.284 #define SPDK_CONFIG_COVERAGE 1 00:09:31.284 #define SPDK_CONFIG_CROSS_PREFIX 00:09:31.284 #undef SPDK_CONFIG_CRYPTO 00:09:31.284 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:31.284 #undef SPDK_CONFIG_CUSTOMOCF 00:09:31.284 #undef SPDK_CONFIG_DAOS 00:09:31.284 #define SPDK_CONFIG_DAOS_DIR 00:09:31.284 #define SPDK_CONFIG_DEBUG 1 00:09:31.284 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:31.284 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:09:31.284 #define 
SPDK_CONFIG_DPDK_INC_DIR 00:09:31.284 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:31.284 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:31.284 #undef SPDK_CONFIG_DPDK_UADK 00:09:31.284 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:31.284 #define SPDK_CONFIG_EXAMPLES 1 00:09:31.284 #undef SPDK_CONFIG_FC 00:09:31.284 #define SPDK_CONFIG_FC_PATH 00:09:31.284 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:31.284 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:31.284 #undef SPDK_CONFIG_FUSE 00:09:31.284 #undef SPDK_CONFIG_FUZZER 00:09:31.284 #define SPDK_CONFIG_FUZZER_LIB 00:09:31.284 #undef SPDK_CONFIG_GOLANG 00:09:31.284 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:31.284 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:31.284 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:31.284 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:31.284 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:31.284 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:31.284 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:31.284 #define SPDK_CONFIG_IDXD 1 00:09:31.284 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:31.284 #undef SPDK_CONFIG_IPSEC_MB 00:09:31.284 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:31.284 #define SPDK_CONFIG_ISAL 1 00:09:31.284 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:31.284 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:31.284 #define SPDK_CONFIG_LIBDIR 00:09:31.284 #undef SPDK_CONFIG_LTO 00:09:31.284 #define SPDK_CONFIG_MAX_LCORES 128 00:09:31.284 #define SPDK_CONFIG_NVME_CUSE 1 00:09:31.284 #undef SPDK_CONFIG_OCF 00:09:31.284 #define SPDK_CONFIG_OCF_PATH 00:09:31.284 #define SPDK_CONFIG_OPENSSL_PATH 00:09:31.284 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:31.284 #define SPDK_CONFIG_PGO_DIR 00:09:31.284 #undef SPDK_CONFIG_PGO_USE 00:09:31.284 #define SPDK_CONFIG_PREFIX /usr/local 00:09:31.284 #undef SPDK_CONFIG_RAID5F 00:09:31.284 #define SPDK_CONFIG_RBD 1 00:09:31.284 #define SPDK_CONFIG_RDMA 1 00:09:31.284 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:31.284 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:31.284 
#define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:31.284 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:31.284 #define SPDK_CONFIG_SHARED 1 00:09:31.284 #undef SPDK_CONFIG_SMA 00:09:31.284 #define SPDK_CONFIG_TESTS 1 00:09:31.284 #undef SPDK_CONFIG_TSAN 00:09:31.284 #define SPDK_CONFIG_UBLK 1 00:09:31.284 #define SPDK_CONFIG_UBSAN 1 00:09:31.284 #undef SPDK_CONFIG_UNIT_TESTS 00:09:31.284 #undef SPDK_CONFIG_URING 00:09:31.284 #define SPDK_CONFIG_URING_PATH 00:09:31.284 #undef SPDK_CONFIG_URING_ZNS 00:09:31.284 #undef SPDK_CONFIG_USDT 00:09:31.284 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:31.284 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:31.284 #undef SPDK_CONFIG_VFIO_USER 00:09:31.284 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:31.284 #define SPDK_CONFIG_VHOST 1 00:09:31.284 #define SPDK_CONFIG_VIRTIO 1 00:09:31.284 #undef SPDK_CONFIG_VTUNE 00:09:31.284 #define SPDK_CONFIG_VTUNE_DIR 00:09:31.284 #define SPDK_CONFIG_WERROR 1 00:09:31.284 #define SPDK_CONFIG_WPDK_DIR 00:09:31.284 #undef SPDK_CONFIG_XNVME 00:09:31.284 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@5 -- # export PATH 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@68 -- # uname -s 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 
00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:31.284 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@58 -- # : 1 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@70 -- # : 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@76 -- # : 1 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@78 -- # : 1 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@86 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@92 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@94 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@104 -- # : 1 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:31.285 17:13:50 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@120 -- # : 1 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@124 -- # : 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:31.285 17:13:50 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@138 -- # : 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 
00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@154 -- # : 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@167 -- # : 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:09:31.285 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@169 -- # : 0 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@175 -- # 
export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@193 -- # export 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@200 -- # cat 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem 
-- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:09:31.286 17:13:50 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@318 -- # [[ -z 65072 ]] 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@318 -- # kill -0 65072 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@330 -- # local 
requested_size=2147483648 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.RRnLRP 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem /tmp/spdk.RRnLRP/tests/filesystem /tmp/spdk.RRnLRP 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@327 -- # df -T 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6263177216 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:09:31.286 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2496167936 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10989568 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem 
-- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13788872704 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5240061952 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13788872704 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5240061952 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:09:31.287 17:13:50 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267748352 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=143360 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 
00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt/output 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=93575061504 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6127718400 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:09:31.287 * Looking for test storage... 
00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@374 -- # target_space=13788872704 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:09:31.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:09:31.287 17:13:50 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@389 -- # return 0 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1687 -- # true 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:09:31.287 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@25 -- # 
INITIATOR_TAG=2 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@11 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@5 -- # export PATH 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@13 -- # iscsitestinit 00:09:31.288 17:13:50 
iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@29 -- # timing_enter start_iscsi_tgt 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@32 -- # pid=65115 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@31 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@33 -- # echo 'Process pid: 65115' 00:09:31.288 Process pid: 65115 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@35 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@37 -- # waitforlisten 65115 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@829 -- # '[' -z 65115 ']' 00:09:31.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:31.288 17:13:50 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:31.546 [2024-07-22 17:13:50.313042] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:31.546 [2024-07-22 17:13:50.313248] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65115 ] 00:09:31.804 [2024-07-22 17:13:50.499545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.062 [2024-07-22 17:13:50.784347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.062 [2024-07-22 17:13:50.784513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.062 [2024-07-22 17:13:50.784634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.062 [2024-07-22 17:13:50.784804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.321 17:13:51 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.321 17:13:51 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@862 -- # return 0 00:09:32.321 17:13:51 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@38 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:09:32.321 17:13:51 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.321 17:13:51 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:32.321 17:13:51 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.321 17:13:51 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@39 -- # rpc_cmd framework_start_init 00:09:32.321 17:13:51 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.321 17:13:51 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.256 iscsi_tgt is listening. Running tests... 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@40 -- # echo 'iscsi_tgt is listening. Running tests...' 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@42 -- # timing_exit start_iscsi_tgt 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@44 -- # get_first_nvme_bdf 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1524 -- # bdfs=() 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1524 -- # local bdfs 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1513 -- # bdfs=() 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1513 -- # local bdfs 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@44 -- # bdf=0000:00:10.0 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@45 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@46 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@47 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:00:10.0 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.256 Nvme0n1 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@49 -- # rpc_cmd bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.256 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@49 -- # ls_guid=f6e2470c-e105-4c40-b743-eba8d006ca36 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@50 -- # get_lvs_free_mb f6e2470c-e105-4c40-b743-eba8d006ca36 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1364 -- # local lvs_uuid=f6e2470c-e105-4c40-b743-eba8d006ca36 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1365 -- # local lvs_info 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1366 -- # local fc 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1367 -- # local cs 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_lvol_get_lvstores 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:09:33.515 { 00:09:33.515 "uuid": "f6e2470c-e105-4c40-b743-eba8d006ca36", 00:09:33.515 "name": "lvs_0", 00:09:33.515 "base_bdev": "Nvme0n1", 00:09:33.515 "total_data_clusters": 1278, 00:09:33.515 "free_clusters": 1278, 00:09:33.515 "block_size": 4096, 00:09:33.515 "cluster_size": 4194304 00:09:33.515 } 00:09:33.515 ]' 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="f6e2470c-e105-4c40-b743-eba8d006ca36") 
.free_clusters' 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1369 -- # fc=1278 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="f6e2470c-e105-4c40-b743-eba8d006ca36") .cluster_size' 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1370 -- # cs=4194304 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1373 -- # free_mb=5112 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1374 -- # echo 5112 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@50 -- # free_mb=5112 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@52 -- # '[' 5112 -gt 2048 ']' 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@53 -- # rpc_cmd bdev_lvol_create -u f6e2470c-e105-4c40-b743-eba8d006ca36 lbd_0 2048 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.515 f57cc285-9bd5-44e4-a3ec-57136926127f 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@61 -- # lvol_name=lvs_0/lbd_0 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@62 -- # rpc_cmd iscsi_create_target_node Target1 Target1_alias lvs_0/lbd_0:0 1:2 256 -d 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.515 17:13:52 iscsi_tgt.iscsi_tgt_filesystem -- 
filesystem/filesystem.sh@63 -- # sleep 1 00:09:34.450 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@65 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:09:34.709 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@66 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:09:34.709 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:09:34.709 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@67 -- # waitforiscsidevices 1 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@116 -- # local num=1 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:09:34.709 [2024-07-22 17:13:53.445592] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # n=1 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@123 -- # return 0 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@69 -- # get_bdev_size lvs_0/lbd_0 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1378 -- # local bdev_name=lvs_0/lbd_0 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:34.709 17:13:53 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1380 -- # local bs 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1381 -- # local nb 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b lvs_0/lbd_0 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:34.709 { 00:09:34.709 "name": "f57cc285-9bd5-44e4-a3ec-57136926127f", 00:09:34.709 "aliases": [ 00:09:34.709 "lvs_0/lbd_0" 00:09:34.709 ], 00:09:34.709 "product_name": "Logical Volume", 00:09:34.709 "block_size": 4096, 00:09:34.709 "num_blocks": 524288, 00:09:34.709 "uuid": "f57cc285-9bd5-44e4-a3ec-57136926127f", 00:09:34.709 "assigned_rate_limits": { 00:09:34.709 "rw_ios_per_sec": 0, 00:09:34.709 "rw_mbytes_per_sec": 0, 00:09:34.709 "r_mbytes_per_sec": 0, 00:09:34.709 "w_mbytes_per_sec": 0 00:09:34.709 }, 00:09:34.709 "claimed": false, 00:09:34.709 "zoned": false, 00:09:34.709 "supported_io_types": { 00:09:34.709 "read": true, 00:09:34.709 "write": true, 00:09:34.709 "unmap": true, 00:09:34.709 "flush": false, 00:09:34.709 "reset": true, 00:09:34.709 "nvme_admin": false, 00:09:34.709 "nvme_io": false, 00:09:34.709 "nvme_io_md": false, 00:09:34.709 "write_zeroes": true, 00:09:34.709 "zcopy": false, 00:09:34.709 "get_zone_info": false, 00:09:34.709 "zone_management": false, 00:09:34.709 "zone_append": false, 00:09:34.709 "compare": false, 00:09:34.709 "compare_and_write": false, 00:09:34.709 "abort": false, 00:09:34.709 "seek_hole": true, 00:09:34.709 "seek_data": true, 00:09:34.709 "copy": false, 00:09:34.709 "nvme_iov_md": false 00:09:34.709 }, 
00:09:34.709 "driver_specific": { 00:09:34.709 "lvol": { 00:09:34.709 "lvol_store_uuid": "f6e2470c-e105-4c40-b743-eba8d006ca36", 00:09:34.709 "base_bdev": "Nvme0n1", 00:09:34.709 "thin_provision": false, 00:09:34.709 "num_allocated_clusters": 512, 00:09:34.709 "snapshot": false, 00:09:34.709 "clone": false, 00:09:34.709 "esnap_clone": false 00:09:34.709 } 00:09:34.709 } 00:09:34.709 } 00:09:34.709 ]' 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1383 -- # bs=4096 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1384 -- # nb=524288 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1387 -- # bdev_size=2048 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1388 -- # echo 2048 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@69 -- # lvol_size=2147483648 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@70 -- # trap 'iscsicleanup; remove_backends; umount /mnt/device; rm -rf /mnt/device; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@72 -- # mkdir -p /mnt/device 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # iscsiadm -m session -P 3 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # grep 'Attached scsi disk' 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # awk '{print $4}' 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # dev=sda 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- 
filesystem/filesystem.sh@76 -- # waitforfile /dev/sda 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1265 -- # local i=0 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda ']' 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda ']' 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1276 -- # return 0 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@78 -- # sec_size_to_bytes sda 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@76 -- # local dev=sda 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@78 -- # [[ -e /sys/block/sda ]] 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@80 -- # echo 2147483648 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@78 -- # dev_size=2147483648 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@80 -- # (( lvol_size == dev_size )) 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@81 -- # parted -s /dev/sda mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:34.709 17:13:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@82 -- # sleep 1 00:09:34.709 [2024-07-22 17:13:53.615277] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:36.085 17:13:54 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@144 -- # run_test iscsi_tgt_filesystem_ext4 filesystem_test ext4 00:09:36.085 17:13:54 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:36.085 17:13:54 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.085 17:13:54 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:36.085 ************************************ 
00:09:36.085 START TEST iscsi_tgt_filesystem_ext4 00:09:36.085 ************************************ 00:09:36.085 17:13:54 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1123 -- # filesystem_test ext4 00:09:36.085 17:13:54 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@89 -- # fstype=ext4 00:09:36.085 17:13:54 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@91 -- # make_filesystem ext4 /dev/sda1 00:09:36.085 17:13:54 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:09:36.085 17:13:54 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda1 00:09:36.085 17:13:54 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:09:36.085 17:13:54 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:09:36.085 17:13:54 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:09:36.085 17:13:54 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:09:36.085 17:13:54 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda1 00:09:36.085 mke2fs 1.46.5 (30-Dec-2021) 00:09:36.085 Discarding device blocks: 0/522240 done 00:09:36.085 Creating filesystem with 522240 4k blocks and 130560 inodes 00:09:36.085 Filesystem UUID: 0660b518-d91b-4ee2-ae60-512a8824f05c 00:09:36.085 Superblock backups stored on blocks: 00:09:36.085 32768, 98304, 163840, 229376, 294912 00:09:36.085 00:09:36.085 Allocating group tables: 0/16 done 00:09:36.085 Writing inode tables: 0/16 done 00:09:36.085 Creating journal (8192 blocks): done 00:09:36.085 Writing superblocks and filesystem accounting 
information: 0/16 done 00:09:36.085 00:09:36.085 17:13:54 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:09:36.085 17:13:54 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:09:36.085 17:13:54 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@93 -- # '[' 1 -eq 1 ']' 00:09:36.085 17:13:54 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@94 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1024M -name=job0 00:09:36.343 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:09:36.343 fio-3.35 00:09:36.343 Starting 1 thread 00:09:36.343 job0: Laying out IO file (1 file / 1024MiB) 00:09:54.471 00:09:54.471 job0: (groupid=0, jobs=1): err= 0: pid=65271: Mon Jul 22 17:14:13 2024 00:09:54.471 write: IOPS=14.6k, BW=57.1MiB/s (59.9MB/s)(1024MiB/17920msec); 0 zone resets 00:09:54.471 slat (usec): min=5, max=33562, avg=22.21, stdev=172.90 00:09:54.471 clat (usec): min=690, max=47368, avg=4350.72, stdev=2089.05 00:09:54.471 lat (usec): min=703, max=47380, avg=4372.93, stdev=2101.48 00:09:54.471 clat percentiles (usec): 00:09:54.471 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2769], 20.00th=[ 3097], 00:09:54.471 | 30.00th=[ 3621], 40.00th=[ 4015], 50.00th=[ 4293], 60.00th=[ 4555], 00:09:54.471 | 70.00th=[ 4817], 80.00th=[ 5080], 90.00th=[ 5669], 95.00th=[ 6259], 00:09:54.471 | 99.00th=[ 7373], 99.50th=[10945], 99.90th=[32375], 99.95th=[43254], 00:09:54.471 | 99.99th=[45351] 00:09:54.471 bw ( KiB/s): min=45480, max=62576, per=100.00%, avg=58520.23, stdev=4409.33, samples=35 00:09:54.471 iops : min=11370, max=15644, avg=14630.06, stdev=1102.33, samples=35 00:09:54.471 lat (usec) : 750=0.01%, 1000=0.01% 00:09:54.471 lat (msec) : 2=0.13%, 4=39.40%, 
10=59.93%, 20=0.13%, 50=0.40% 00:09:54.471 cpu : usr=5.40%, sys=20.63%, ctx=23231, majf=0, minf=1 00:09:54.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:09:54.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:09:54.471 issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:09:54.471 00:09:54.471 Run status group 0 (all jobs): 00:09:54.471 WRITE: bw=57.1MiB/s (59.9MB/s), 57.1MiB/s-57.1MiB/s (59.9MB/s-59.9MB/s), io=1024MiB (1074MB), run=17920-17920msec 00:09:54.471 00:09:54.471 Disk stats (read/write): 00:09:54.471 sda: ios=0/260335, merge=0/2982, ticks=0/1024593, in_queue=1024593, util=99.47% 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@96 -- # umount /mnt/device 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@98 -- # iscsiadm -m node --logout 00:09:54.471 Logging out of session [sid: 1, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:09:54.471 Logout of [sid: 1, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@99 -- # waitforiscsidevices 0 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@116 -- # local num=0 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:09:54.471 iscsiadm: No active sessions. 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # true 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # n=0 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@123 -- # return 0 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@100 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:09:54.471 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:09:54.471 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@101 -- # waitforiscsidevices 1 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@116 -- # local num=1 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:09:54.471 [2024-07-22 17:14:13.290523] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # n=1 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@123 -- # return 0 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # iscsiadm -m session -P 3 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # awk '{print $4}' 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # grep 'Attached scsi disk' 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # dev=sda 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@105 -- # waitforfile /dev/sda1 00:09:54.471 17:14:13 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1265 -- # local i=0 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1276 -- # return 0 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@106 -- # mount -o rw /dev/sda1 /mnt/device 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@107 -- # '[' -f /mnt/device/test ']' 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@108 -- # echo 'File existed.' 00:09:54.471 File existed. 00:09:54.471 17:14:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@109 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randread -ioengine=libaio -bs=4k -runtime=20 -time_based=1 -name=job0 00:09:54.730 job0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:09:54.730 fio-3.35 00:09:54.730 Starting 1 thread 00:10:16.684 00:10:16.684 job0: (groupid=0, jobs=1): err= 0: pid=65609: Mon Jul 22 17:14:33 2024 00:10:16.684 read: IOPS=15.6k, BW=61.0MiB/s (63.9MB/s)(1220MiB/20004msec) 00:10:16.684 slat (usec): min=2, max=3914, avg=10.46, stdev=51.49 00:10:16.684 clat (usec): min=621, max=42797, avg=4083.48, stdev=1379.65 00:10:16.684 lat (usec): min=685, max=44351, avg=4093.94, stdev=1390.13 00:10:16.684 clat percentiles (usec): 00:10:16.684 | 1.00th=[ 2147], 5.00th=[ 2540], 10.00th=[ 2638], 20.00th=[ 2933], 00:10:16.684 | 30.00th=[ 3359], 40.00th=[ 3720], 50.00th=[ 3982], 60.00th=[ 
4293], 00:10:16.684 | 70.00th=[ 4686], 80.00th=[ 5080], 90.00th=[ 5473], 95.00th=[ 5866], 00:10:16.684 | 99.00th=[ 6915], 99.50th=[ 8094], 99.90th=[17433], 99.95th=[28443], 00:10:16.684 | 99.99th=[37487] 00:10:16.684 bw ( KiB/s): min=28167, max=73952, per=100.00%, avg=62461.72, stdev=6223.22, samples=39 00:10:16.684 iops : min= 7041, max=18488, avg=15615.36, stdev=1555.93, samples=39 00:10:16.684 lat (usec) : 750=0.01%, 1000=0.01% 00:10:16.684 lat (msec) : 2=0.43%, 4=49.87%, 10=49.41%, 20=0.20%, 50=0.08% 00:10:16.684 cpu : usr=5.82%, sys=13.36%, ctx=28421, majf=0, minf=65 00:10:16.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:10:16.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:10:16.684 issued rwts: total=312254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.684 latency : target=0, window=0, percentile=100.00%, depth=64 00:10:16.684 00:10:16.684 Run status group 0 (all jobs): 00:10:16.684 READ: bw=61.0MiB/s (63.9MB/s), 61.0MiB/s-61.0MiB/s (63.9MB/s-63.9MB/s), io=1220MiB (1279MB), run=20004-20004msec 00:10:16.684 00:10:16.684 Disk stats (read/write): 00:10:16.684 sda: ios=309592/5, merge=1462/2, ticks=1196664/7, in_queue=1196671, util=99.60% 00:10:16.684 17:14:33 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@116 -- # rm -rf /mnt/device/test 00:10:16.684 17:14:33 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@117 -- # umount /mnt/device 00:10:16.684 00:10:16.684 real 0m38.997s 00:10:16.684 user 0m2.399s 00:10:16.684 sys 0m6.608s 00:10:16.684 17:14:33 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:16.684 17:14:33 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:16.684 ************************************ 00:10:16.684 
END TEST iscsi_tgt_filesystem_ext4 00:10:16.684 ************************************ 00:10:16.684 17:14:33 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:10:16.684 17:14:33 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@145 -- # run_test iscsi_tgt_filesystem_btrfs filesystem_test btrfs 00:10:16.684 17:14:33 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:16.684 17:14:33 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.684 17:14:33 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:16.684 ************************************ 00:10:16.684 START TEST iscsi_tgt_filesystem_btrfs 00:10:16.684 ************************************ 00:10:16.684 17:14:33 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1123 -- # filesystem_test btrfs 00:10:16.684 17:14:33 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@89 -- # fstype=btrfs 00:10:16.684 17:14:33 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@91 -- # make_filesystem btrfs /dev/sda1 00:10:16.684 17:14:33 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:10:16.684 17:14:33 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda1 00:10:16.684 17:14:33 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:10:16.684 17:14:33 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:10:16.684 17:14:33 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:10:16.684 17:14:33 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- 
common/autotest_common.sh@932 -- # force=-f 00:10:16.684 17:14:33 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/sda1 00:10:16.684 btrfs-progs v6.6.2 00:10:16.684 See https://btrfs.readthedocs.io for more information. 00:10:16.684 00:10:16.684 Performing full device TRIM /dev/sda1 (1.99GiB) ... 00:10:16.684 NOTE: several default settings have changed in version 5.15, please make sure 00:10:16.684 this does not affect your deployments: 00:10:16.684 - DUP for metadata (-m dup) 00:10:16.684 - enabled no-holes (-O no-holes) 00:10:16.684 - enabled free-space-tree (-R free-space-tree) 00:10:16.684 00:10:16.684 Label: (null) 00:10:16.684 UUID: 3f9fcce9-20c9-4567-8833-9f4e957fd97f 00:10:16.684 Node size: 16384 00:10:16.684 Sector size: 4096 00:10:16.684 Filesystem size: 1.99GiB 00:10:16.685 Block group profiles: 00:10:16.685 Data: single 8.00MiB 00:10:16.685 Metadata: DUP 102.00MiB 00:10:16.685 System: DUP 8.00MiB 00:10:16.685 SSD detected: yes 00:10:16.685 Zoned device: no 00:10:16.685 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:16.685 Runtime features: free-space-tree 00:10:16.685 Checksum: crc32c 00:10:16.685 Number of devices: 1 00:10:16.685 Devices: 00:10:16.685 ID SIZE PATH 00:10:16.685 1 1.99GiB /dev/sda1 00:10:16.685 00:10:16.685 17:14:33 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:10:16.685 17:14:33 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:10:16.685 17:14:33 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@93 -- # '[' 1 -eq 1 ']' 00:10:16.685 17:14:33 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@94 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1024M -name=job0 
00:10:16.685 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:10:16.685 fio-3.35 00:10:16.685 Starting 1 thread 00:10:16.685 job0: Laying out IO file (1 file / 1024MiB) 00:10:34.800 00:10:34.800 job0: (groupid=0, jobs=1): err= 0: pid=65876: Mon Jul 22 17:14:52 2024 00:10:34.800 write: IOPS=14.6k, BW=57.2MiB/s (59.9MB/s)(1024MiB/17917msec); 0 zone resets 00:10:34.800 slat (usec): min=8, max=4352, avg=39.78, stdev=77.44 00:10:34.800 clat (usec): min=1202, max=15351, avg=4332.30, stdev=1249.18 00:10:34.800 lat (usec): min=1381, max=15379, avg=4372.09, stdev=1255.85 00:10:34.800 clat percentiles (usec): 00:10:34.800 | 1.00th=[ 2024], 5.00th=[ 2409], 10.00th=[ 2769], 20.00th=[ 3228], 00:10:34.800 | 30.00th=[ 3654], 40.00th=[ 4015], 50.00th=[ 4359], 60.00th=[ 4621], 00:10:34.800 | 70.00th=[ 4883], 80.00th=[ 5211], 90.00th=[ 5866], 95.00th=[ 6390], 00:10:34.800 | 99.00th=[ 7898], 99.50th=[ 8717], 99.90th=[10814], 99.95th=[11600], 00:10:34.800 | 99.99th=[13042] 00:10:34.800 bw ( KiB/s): min=49864, max=62144, per=100.00%, avg=58575.26, stdev=2761.33, samples=35 00:10:34.800 iops : min=12466, max=15536, avg=14643.80, stdev=690.34, samples=35 00:10:34.800 lat (msec) : 2=0.90%, 4=38.18%, 10=60.74%, 20=0.17% 00:10:34.800 cpu : usr=5.22%, sys=32.10%, ctx=46795, majf=0, minf=1 00:10:34.800 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:10:34.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:10:34.800 issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.800 latency : target=0, window=0, percentile=100.00%, depth=64 00:10:34.800 00:10:34.800 Run status group 0 (all jobs): 00:10:34.800 WRITE: bw=57.2MiB/s (59.9MB/s), 57.2MiB/s-57.2MiB/s (59.9MB/s-59.9MB/s), io=1024MiB (1074MB), run=17917-17917msec 00:10:34.800 17:14:52 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@96 -- # umount /mnt/device 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@98 -- # iscsiadm -m node --logout 00:10:34.800 Logging out of session [sid: 2, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:34.800 Logout of [sid: 2, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@99 -- # waitforiscsidevices 0 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@116 -- # local num=0 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:10:34.800 iscsiadm: No active sessions. 
00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # true 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # n=0 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@123 -- # return 0 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@100 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:10:34.800 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:34.800 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@101 -- # waitforiscsidevices 1 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@116 -- # local num=1 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:10:34.800 [2024-07-22 17:14:52.284602] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # n=1 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- 
iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@123 -- # return 0 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # iscsiadm -m session -P 3 00:10:34.800 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # grep 'Attached scsi disk' 00:10:34.801 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # awk '{print $4}' 00:10:34.801 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # dev=sda 00:10:34.801 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@105 -- # waitforfile /dev/sda1 00:10:34.801 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1265 -- # local i=0 00:10:34.801 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:10:34.801 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:10:34.801 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1276 -- # return 0 00:10:34.801 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@106 -- # mount -o rw /dev/sda1 /mnt/device 00:10:34.801 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@107 -- # '[' -f /mnt/device/test ']' 00:10:34.801 File existed. 00:10:34.801 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@108 -- # echo 'File existed.' 
00:10:34.801 17:14:52 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@109 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randread -ioengine=libaio -bs=4k -runtime=20 -time_based=1 -name=job0 00:10:34.801 job0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:10:34.801 fio-3.35 00:10:34.801 Starting 1 thread 00:10:56.773 00:10:56.773 job0: (groupid=0, jobs=1): err= 0: pid=66145: Mon Jul 22 17:15:12 2024 00:10:56.773 read: IOPS=14.8k, BW=57.8MiB/s (60.6MB/s)(1155MiB/20004msec) 00:10:56.773 slat (usec): min=3, max=8377, avg=12.00, stdev=31.27 00:10:56.773 clat (usec): min=832, max=43684, avg=4309.93, stdev=1264.65 00:10:56.773 lat (usec): min=1036, max=44594, avg=4321.93, stdev=1271.95 00:10:56.773 clat percentiles (usec): 00:10:56.773 | 1.00th=[ 2311], 5.00th=[ 2671], 10.00th=[ 2868], 20.00th=[ 3228], 00:10:56.773 | 30.00th=[ 3621], 40.00th=[ 3949], 50.00th=[ 4293], 60.00th=[ 4555], 00:10:56.773 | 70.00th=[ 4948], 80.00th=[ 5276], 90.00th=[ 5800], 95.00th=[ 6128], 00:10:56.773 | 99.00th=[ 6783], 99.50th=[ 7242], 99.90th=[10945], 99.95th=[22676], 00:10:56.773 | 99.99th=[31327] 00:10:56.773 bw ( KiB/s): min=43856, max=64736, per=99.99%, avg=59137.03, stdev=3162.28, samples=39 00:10:56.773 iops : min=10964, max=16184, avg=14784.26, stdev=790.57, samples=39 00:10:56.773 lat (usec) : 1000=0.01% 00:10:56.773 lat (msec) : 2=0.10%, 4=41.19%, 10=58.59%, 20=0.06%, 50=0.06% 00:10:56.773 cpu : usr=4.99%, sys=16.34%, ctx=41828, majf=0, minf=65 00:10:56.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:10:56.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:10:56.773 issued rwts: total=295769,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.773 latency : target=0, window=0, percentile=100.00%, depth=64 
00:10:56.773 00:10:56.773 Run status group 0 (all jobs): 00:10:56.773 READ: bw=57.8MiB/s (60.6MB/s), 57.8MiB/s-57.8MiB/s (60.6MB/s-60.6MB/s), io=1155MiB (1211MB), run=20004-20004msec 00:10:56.773 17:15:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@116 -- # rm -rf /mnt/device/test 00:10:56.773 17:15:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@117 -- # umount /mnt/device 00:10:56.773 00:10:56.773 real 0m38.977s 00:10:56.773 user 0m2.206s 00:10:56.773 sys 0m9.366s 00:10:56.773 ************************************ 00:10:56.773 END TEST iscsi_tgt_filesystem_btrfs 00:10:56.773 ************************************ 00:10:56.773 17:15:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:56.773 17:15:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:56.773 17:15:12 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:10:56.773 17:15:12 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@146 -- # run_test iscsi_tgt_filesystem_xfs filesystem_test xfs 00:10:56.773 17:15:12 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:56.773 17:15:12 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:56.773 17:15:12 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:56.773 ************************************ 00:10:56.773 START TEST iscsi_tgt_filesystem_xfs 00:10:56.773 ************************************ 00:10:56.773 17:15:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1123 -- # filesystem_test xfs 00:10:56.773 17:15:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@89 -- # fstype=xfs 00:10:56.773 17:15:12 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@91 -- # make_filesystem xfs /dev/sda1 00:10:56.774 17:15:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:10:56.774 17:15:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda1 00:10:56.774 17:15:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:10:56.774 17:15:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:10:56.774 17:15:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:10:56.774 17:15:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:10:56.774 17:15:12 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/sda1 00:10:56.774 meta-data=/dev/sda1 isize=512 agcount=4, agsize=130560 blks 00:10:56.774 = sectsz=4096 attr=2, projid32bit=1 00:10:56.774 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:56.774 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:56.774 data = bsize=4096 blocks=522240, imaxpct=25 00:10:56.774 = sunit=0 swidth=0 blks 00:10:56.774 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:56.774 log =internal log bsize=4096 blocks=16384, version=2 00:10:56.774 = sectsz=4096 sunit=1 blks, lazy-count=1 00:10:56.774 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:56.774 Discarding blocks...Done. 
00:10:56.774 17:15:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:10:56.774 17:15:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:10:56.774 17:15:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@93 -- # '[' 1 -eq 1 ']' 00:10:56.774 17:15:13 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@94 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1024M -name=job0 00:10:56.774 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:10:56.774 fio-3.35 00:10:56.774 Starting 1 thread 00:10:56.774 job0: Laying out IO file (1 file / 1024MiB) 00:11:14.851 00:11:14.851 job0: (groupid=0, jobs=1): err= 0: pid=66408: Mon Jul 22 17:15:32 2024 00:11:14.851 write: IOPS=14.4k, BW=56.2MiB/s (58.9MB/s)(1024MiB/18226msec); 0 zone resets 00:11:14.851 slat (usec): min=3, max=7366, avg=22.97, stdev=136.65 00:11:14.851 clat (usec): min=1154, max=14603, avg=4425.26, stdev=1137.00 00:11:14.851 lat (usec): min=1167, max=16983, avg=4448.23, stdev=1144.80 00:11:14.851 clat percentiles (usec): 00:11:14.851 | 1.00th=[ 2376], 5.00th=[ 2606], 10.00th=[ 2933], 20.00th=[ 3326], 00:11:14.851 | 30.00th=[ 3818], 40.00th=[ 4178], 50.00th=[ 4424], 60.00th=[ 4686], 00:11:14.851 | 70.00th=[ 5014], 80.00th=[ 5342], 90.00th=[ 5866], 95.00th=[ 6390], 00:11:14.851 | 99.00th=[ 7177], 99.50th=[ 7570], 99.90th=[ 9110], 99.95th=[10028], 00:11:14.851 | 99.99th=[12387] 00:11:14.851 bw ( KiB/s): min=47200, max=62288, per=100.00%, avg=57537.58, stdev=3259.66, samples=36 00:11:14.851 iops : min=11800, max=15572, avg=14384.33, stdev=814.87, samples=36 00:11:14.851 lat (msec) : 2=0.01%, 4=34.12%, 10=65.82%, 20=0.05% 00:11:14.851 cpu : usr=4.54%, sys=10.81%, ctx=22892, majf=0, minf=1 
00:11:14.851 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:11:14.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:14.851 issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.851 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:14.851 00:11:14.851 Run status group 0 (all jobs): 00:11:14.851 WRITE: bw=56.2MiB/s (58.9MB/s), 56.2MiB/s-56.2MiB/s (58.9MB/s-58.9MB/s), io=1024MiB (1074MB), run=18226-18226msec 00:11:14.851 00:11:14.851 Disk stats (read/write): 00:11:14.851 sda: ios=0/259706, merge=0/905, ticks=0/1026169, in_queue=1026169, util=99.53% 00:11:14.851 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@96 -- # umount /mnt/device 00:11:14.851 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@98 -- # iscsiadm -m node --logout 00:11:14.851 Logging out of session [sid: 3, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:11:14.851 Logout of [sid: 3, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:11:14.851 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@99 -- # waitforiscsidevices 0 00:11:14.851 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@116 -- # local num=0 00:11:14.851 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:11:14.851 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:11:14.851 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:11:14.851 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:11:14.851 iscsiadm: No active sessions. 00:11:14.851 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # true 00:11:14.851 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # n=0 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@123 -- # return 0 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@100 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:11:14.852 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:11:14.852 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@101 -- # waitforiscsidevices 1 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@116 -- # local num=1 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:11:14.852 [2024-07-22 17:15:32.631201] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # n=1 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@123 -- # return 0 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # iscsiadm -m session -P 3 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # grep 'Attached scsi disk' 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # awk '{print $4}' 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # dev=sda 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@105 -- # waitforfile /dev/sda1 00:11:14.852 17:15:32 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1265 -- # local i=0 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1268 -- # i=1 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1269 -- # sleep 0.1 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1276 -- # return 0 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@106 -- # mount -o rw /dev/sda1 /mnt/device 00:11:14.852 File existed. 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@107 -- # '[' -f /mnt/device/test ']' 00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@108 -- # echo 'File existed.' 
00:11:14.852 17:15:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@109 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randread -ioengine=libaio -bs=4k -runtime=20 -time_based=1 -name=job0 00:11:14.852 job0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:11:14.852 fio-3.35 00:11:14.852 Starting 1 thread 00:11:36.824 00:11:36.824 job0: (groupid=0, jobs=1): err= 0: pid=66652: Mon Jul 22 17:15:53 2024 00:11:36.824 read: IOPS=15.0k, BW=58.4MiB/s (61.3MB/s)(1169MiB/20004msec) 00:11:36.824 slat (usec): min=2, max=183, avg= 8.08, stdev= 7.95 00:11:36.824 clat (usec): min=1271, max=15586, avg=4270.09, stdev=1158.55 00:11:36.824 lat (usec): min=1287, max=15596, avg=4278.17, stdev=1158.21 00:11:36.824 clat percentiles (usec): 00:11:36.824 | 1.00th=[ 2311], 5.00th=[ 2606], 10.00th=[ 2769], 20.00th=[ 3228], 00:11:36.824 | 30.00th=[ 3523], 40.00th=[ 3916], 50.00th=[ 4228], 60.00th=[ 4490], 00:11:36.824 | 70.00th=[ 4883], 80.00th=[ 5276], 90.00th=[ 5735], 95.00th=[ 6128], 00:11:36.824 | 99.00th=[ 7177], 99.50th=[ 7832], 99.90th=[10421], 99.95th=[11469], 00:11:36.824 | 99.99th=[13042] 00:11:36.824 bw ( KiB/s): min=47784, max=66256, per=100.00%, avg=59956.92, stdev=3791.34, samples=39 00:11:36.824 iops : min=11946, max=16564, avg=14989.23, stdev=947.84, samples=39 00:11:36.824 lat (msec) : 2=0.08%, 4=41.74%, 10=58.07%, 20=0.12% 00:11:36.824 cpu : usr=5.24%, sys=11.84%, ctx=25500, majf=0, minf=65 00:11:36.824 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:11:36.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:36.825 issued rwts: total=299146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.825 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:36.825 00:11:36.825 Run status group 0 (all 
jobs): 00:11:36.825 READ: bw=58.4MiB/s (61.3MB/s), 58.4MiB/s-58.4MiB/s (61.3MB/s-61.3MB/s), io=1169MiB (1225MB), run=20004-20004msec 00:11:36.825 00:11:36.825 Disk stats (read/write): 00:11:36.825 sda: ios=296084/0, merge=1431/0, ticks=1226410/0, in_queue=1226411, util=99.61% 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@116 -- # rm -rf /mnt/device/test 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@117 -- # umount /mnt/device 00:11:36.825 00:11:36.825 real 0m40.428s 00:11:36.825 user 0m2.147s 00:11:36.825 sys 0m4.613s 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:36.825 ************************************ 00:11:36.825 END TEST iscsi_tgt_filesystem_xfs 00:11:36.825 ************************************ 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@148 -- # rm -rf /mnt/device 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@152 -- # iscsicleanup 00:11:36.825 Cleaning up iSCSI connection 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:11:36.825 Logging out of session [sid: 4, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:11:36.825 Logout of [sid: 4, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@983 -- # rm -rf 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@153 -- # remove_backends 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@17 -- # echo 'INFO: Removing lvol bdev' 00:11:36.825 INFO: Removing lvol bdev 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@18 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.825 [2024-07-22 17:15:53.229970] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (f57cc285-9bd5-44e4-a3ec-57136926127f) received event(SPDK_BDEV_EVENT_REMOVE) 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.825 INFO: Removing lvol stores 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@20 -- # echo 'INFO: Removing lvol stores' 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@21 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.825 INFO: Removing NVMe 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@23 -- # echo 'INFO: Removing NVMe' 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@24 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:11:36.825 17:15:53 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@26 -- # return 0 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@154 -- # killprocess 65115 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@948 -- # '[' -z 65115 ']' 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@952 -- # kill -0 65115 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@953 -- # uname 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65115 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:36.825 killing process with pid 65115 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65115' 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@967 -- # kill 65115 00:11:36.825 17:15:53 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@972 -- # wait 65115 00:11:36.825 17:15:55 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@155 -- # iscsitestfini 00:11:36.825 17:15:55 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:11:36.825 00:11:36.825 real 2m5.543s 00:11:36.825 user 8m1.952s 00:11:36.825 sys 0m33.376s 00:11:36.825 17:15:55 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:11:36.825 17:15:55 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.825 ************************************ 00:11:36.825 END TEST iscsi_tgt_filesystem 00:11:36.825 ************************************ 00:11:36.825 17:15:55 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:11:36.825 17:15:55 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@32 -- # run_test chap_during_discovery /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_discovery.sh 00:11:36.825 17:15:55 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:36.825 17:15:55 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:36.825 17:15:55 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:11:36.825 ************************************ 00:11:36.825 START TEST chap_during_discovery 00:11:36.825 ************************************ 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_discovery.sh 00:11:36.825 * Looking for test storage... 
00:11:36.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 
00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_common.sh 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@7 -- # TARGET_NAME=iqn.2016-06.io.spdk:disk1 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@8 -- # TARGET_ALIAS_NAME=disk1_alias 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@9 -- # MALLOC_BDEV_SIZE=64 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@10 -- # MALLOC_BLOCK_SIZE=512 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@13 -- # USER=chapo 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@14 -- # MUSER=mchapo 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@15 -- # PASS=123456789123 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@16 -- # MPASS=321978654321 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@19 -- # iscsitestinit 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@21 -- # set_up_iscsi_target 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@140 -- # timing_enter start_iscsi_tgt 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 
00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@141 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@142 -- # pid=66970 00:11:36.825 iSCSI target launched. pid: 66970 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@143 -- # echo 'iSCSI target launched. pid: 66970' 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@144 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@145 -- # waitforlisten 66970 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@829 -- # '[' -z 66970 ']' 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.825 17:15:55 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:36.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.826 17:15:55 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.826 17:15:55 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:36.826 17:15:55 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.826 [2024-07-22 17:15:55.761515] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:11:36.826 [2024-07-22 17:15:55.761734] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66970 ] 00:11:37.393 [2024-07-22 17:15:56.082740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.393 [2024-07-22 17:15:56.306059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.651 17:15:56 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:37.651 17:15:56 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@862 -- # return 0 00:11:37.651 17:15:56 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@146 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:11:37.651 17:15:56 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.651 17:15:56 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.651 17:15:56 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.651 17:15:56 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@147 -- # rpc_cmd framework_start_init 00:11:37.651 17:15:56 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.651 17:15:56 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.587 17:15:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.587 iscsi_tgt is listening. Running tests... 00:11:38.587 17:15:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@148 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:11:38.587 17:15:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@149 -- # timing_exit start_iscsi_tgt 00:11:38.587 17:15:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:38.587 17:15:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.587 17:15:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@151 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:11:38.587 17:15:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.587 17:15:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.587 17:15:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.587 17:15:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@152 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:11:38.587 17:15:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.587 17:15:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.587 17:15:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.587 17:15:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@153 -- # rpc_cmd bdev_malloc_create 64 512 00:11:38.587 17:15:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.587 17:15:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.587 Malloc0 00:11:38.587 17:15:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.587 17:15:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@154 -- # rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d 00:11:38.587 17:15:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.587 17:15:57 
iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.587 17:15:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.587 17:15:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@155 -- # sleep 1 00:11:39.524 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@156 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:39.524 configuring target for bidirectional authentication 00:11:39.524 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@24 -- # echo 'configuring target for bidirectional authentication' 00:11:39.524 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@25 -- # config_chap_credentials_for_target -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:11:39.524 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@84 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:11:39.524 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@13 -- # OPTIND=0 00:11:39.524 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:11:39.524 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:11:39.524 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@20 -- # CHAP_MPASS= 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 
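Once the options are parsed, config_chap_credentials_for_target (invoked above) reduces to three RPCs. A sketch with `rpc_cmd` stubbed to print instead of calling a live target, so the sequence is visible without SPDK running; the RPC names and flags are taken from the trace, but the stub itself is illustrative (the real helper forwards to scripts/rpc.py):

```shell
#!/usr/bin/env bash
# Stubbed rpc_cmd: print the call instead of talking to a running iscsi_tgt.
rpc_cmd() { echo "rpc: $*"; }

AUTH_GROUP_ID=1
CHAP_USER=chapo   CHAP_PASS=123456789123    # secret the target verifies
CHAP_MUSER=mchapo CHAP_MPASS=321978654321   # mutual secret the initiator verifies

rpc_cmd iscsi_create_auth_group "$AUTH_GROUP_ID"
rpc_cmd iscsi_auth_group_add_secret -u "$CHAP_USER" -s "$CHAP_PASS" \
    -m "$CHAP_MUSER" -r "$CHAP_MPASS" "$AUTH_GROUP_ID"
# -r: require CHAP, -m: mutual CHAP, -g: auth group enforced during discovery
rpc_cmd iscsi_set_discovery_auth -r -m -g "$AUTH_GROUP_ID"
```

With `-r -m` set, the discovery login later in the log is rejected until the initiator presents both secrets.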
00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 
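The parse_cmd_line execution traced above is a plain getopts loop over the spec `:t:u:s:r:m:dlb`. A self-contained mirror of it (note that `-r`/`-m` carry the mutual-CHAP user and secret, while `-d`/`-l`/`-b` are boolean flags; the original initializes `OPTIND=0`, which bash treats like a reset):

```shell
#!/usr/bin/env bash
# Mirror of parse_cmd_line from chap_common.sh: option parsing only.
parse_cmd_line() {
    local OPTIND=1 opt
    DURING_DISCOVERY=0 DURING_LOGIN=0 BI_DIRECT=0
    CHAP_USER="" CHAP_PASS="" CHAP_MUSER="" CHAP_MPASS="" AUTH_GROUP_ID=1
    while getopts ":t:u:s:r:m:dlb" opt; do
        case ${opt} in
            t) AUTH_GROUP_ID=$OPTARG ;;
            u) CHAP_USER=$OPTARG ;;
            s) CHAP_PASS=$OPTARG ;;
            r) CHAP_MUSER=$OPTARG ;;    # mutual-CHAP user
            m) CHAP_MPASS=$OPTARG ;;    # mutual-CHAP secret
            d) DURING_DISCOVERY=1 ;;
            l) DURING_LOGIN=1 ;;
            b) BI_DIRECT=1 ;;
        esac
    done
}

parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b
echo "$CHAP_USER/$CHAP_MUSER discovery=$DURING_DISCOVERY bidi=$BI_DIRECT"
# → chapo/mchapo discovery=1 bidi=1
```

The leading colon in the spec puts getopts in silent error mode, so unknown flags fall through the case statement instead of printing to stderr.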
00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 1 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@95 -- # '[' 0 -eq 1 ']' 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@103 -- # '[' 1 -eq 1 ']' 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@104 -- # rpc_cmd iscsi_set_discovery_auth -r -m -g 1 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.525 17:15:58 
iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.525 executing discovery without adding credential to initiator - we expect failure 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@26 -- # echo 'executing discovery without adding credential to initiator - we expect failure' 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@27 -- # rc=0 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@28 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:11:39.525 iscsiadm: Login failed to authenticate with target 00:11:39.525 iscsiadm: discovery login to 10.0.0.1 rejected: initiator failed authorization 00:11:39.525 iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@28 -- # rc=24 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@29 -- # '[' 24 -eq 0 ']' 00:11:39.525 configuring initiator for bidirectional authentication 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@35 -- # echo 'configuring initiator for bidirectional authentication' 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@36 -- # config_chap_credentials_for_initiator -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@113 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@13 -- # OPTIND=0 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@16 
-- # BI_DIRECT=0 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@20 -- # CHAP_MPASS= 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 
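On the initiator side, config_chap_credentials_for_initiator has no RPC to call: it rewrites /etc/iscsi/iscsid.conf in place with sed and restarts iscsid. A sketch of the same comment/uncomment toggling, run against a scratch copy rather than the real config (credential values are the test's, the temp file is illustrative):

```shell
#!/usr/bin/env bash
# Toggle discovery-CHAP settings in an iscsid.conf-style file with sed,
# against a temp copy instead of /etc/iscsi/iscsid.conf.
conf=$(mktemp)
printf '%s\n' \
    '#discovery.sendtargets.auth.authmethod = CHAP' \
    '#discovery.sendtargets.auth.username = username' \
    '#discovery.sendtargets.auth.password = password' > "$conf"

# Enable discovery CHAP and fill in the credentials:
sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' "$conf"
sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' "$conf"
sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' "$conf"

grep '^discovery.sendtargets.auth.username' "$conf"
# → discovery.sendtargets.auth.username = chapo
rm -f "$conf"
```

The reset path is symmetric: anchor on the uncommented `^discovery...` lines and put the `#` back, then restart iscsid so the daemon rereads the file.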
00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@114 -- # default_initiator_chap_credentials 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:11:39.525 iscsiadm: No matching sessions found 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # true 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:11:39.525 iscsiadm: No records found 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # true 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password 
= password/' /etc/iscsi/iscsid.conf 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:11:39.525 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:11:39.785 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:11:39.785 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:11:39.785 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:11:39.785 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:11:39.785 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:11:39.785 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@78 -- # restart_iscsid 00:11:39.785 17:15:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:11:43.070 17:16:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:11:43.070 17:16:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- # sleep 1 00:11:43.637 17:16:02 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@79 -- # trap 'trap - 
ERR; print_backtrace >&2' ERR 00:11:43.637 17:16:02 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@116 -- # '[' 0 -eq 1 ']' 00:11:43.637 17:16:02 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@126 -- # '[' 1 -eq 1 ']' 00:11:43.637 17:16:02 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@127 -- # sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:11:43.637 17:16:02 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@128 -- # sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:11:43.637 17:16:02 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@129 -- # sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:11:43.637 17:16:02 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' 1 -eq 1 ']' 00:11:43.637 17:16:02 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' -n 321978654321 ']' 00:11:43.637 17:16:02 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' -n mchapo ']' 00:11:43.637 17:16:02 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@131 -- # sed -i 's/#discovery.sendtargets.auth.username_in =.*/discovery.sendtargets.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:11:43.637 17:16:02 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@132 -- # sed -i 's/#discovery.sendtargets.auth.password_in =.*/discovery.sendtargets.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:11:43.896 17:16:02 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@135 -- # restart_iscsid 00:11:43.896 17:16:02 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:11:47.178 17:16:05 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:11:47.178 17:16:05 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- 
# sleep 1 00:11:47.745 17:16:06 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@136 -- # trap 'trap - ERR; default_initiator_chap_credentials; print_backtrace >&2' ERR 00:11:47.745 executing discovery with adding credential to initiator 00:11:47.745 17:16:06 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@37 -- # echo 'executing discovery with adding credential to initiator' 00:11:47.745 17:16:06 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@38 -- # rc=0 00:11:47.745 17:16:06 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@39 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:11:47.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1 00:11:47.745 17:16:06 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@40 -- # '[' 0 -ne 0 ']' 00:11:47.745 DONE 00:11:47.745 17:16:06 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@44 -- # echo DONE 00:11:47.745 17:16:06 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@45 -- # default_initiator_chap_credentials 00:11:47.745 17:16:06 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:11:47.745 iscsiadm: No matching sessions found 00:11:47.745 17:16:06 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # true 00:11:47.745 17:16:06 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:11:47.745 17:16:06 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:11:48.004 17:16:06 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:11:48.004 17:16:06 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:11:48.004 17:16:06 
iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:11:48.004 17:16:06 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:11:48.004 17:16:06 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:11:48.004 17:16:06 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:11:48.004 17:16:06 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:11:48.004 17:16:06 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:11:48.004 17:16:06 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:11:48.004 17:16:06 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@78 -- # restart_iscsid 00:11:48.004 17:16:06 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:11:51.294 17:16:09 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:11:51.295 17:16:09 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- # sleep 1 00:11:51.888 17:16:10 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:51.888 17:16:10 
iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@47 -- # trap - SIGINT SIGTERM EXIT 00:11:51.888 17:16:10 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@49 -- # killprocess 66970 00:11:51.888 17:16:10 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@948 -- # '[' -z 66970 ']' 00:11:51.888 17:16:10 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@952 -- # kill -0 66970 00:11:51.888 17:16:10 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@953 -- # uname 00:11:51.888 17:16:10 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:52.155 17:16:10 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66970 00:11:52.155 killing process with pid 66970 00:11:52.155 17:16:10 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:52.155 17:16:10 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:52.155 17:16:10 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66970' 00:11:52.155 17:16:10 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@967 -- # kill 66970 00:11:52.155 17:16:10 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@972 -- # wait 66970 00:11:54.689 17:16:13 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@51 -- # iscsitestfini 00:11:54.689 17:16:13 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:11:54.689 00:11:54.689 real 0m17.671s 00:11:54.689 user 0m17.429s 00:11:54.689 sys 0m0.839s 00:11:54.689 17:16:13 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:54.689 ************************************ 00:11:54.689 17:16:13 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.689 END TEST chap_during_discovery 00:11:54.689 
************************************ 00:11:54.689 17:16:13 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:11:54.689 17:16:13 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@33 -- # run_test chap_mutual_auth /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_mutual_not_set.sh 00:11:54.689 17:16:13 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:54.689 17:16:13 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:54.689 17:16:13 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:11:54.689 ************************************ 00:11:54.689 START TEST chap_mutual_auth 00:11:54.689 ************************************ 00:11:54.689 17:16:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_mutual_not_set.sh 00:11:54.690 * Looking for test storage... 00:11:54.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 
00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_common.sh 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@7 -- # TARGET_NAME=iqn.2016-06.io.spdk:disk1 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@8 -- # TARGET_ALIAS_NAME=disk1_alias 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@9 -- # MALLOC_BDEV_SIZE=64 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@10 -- # MALLOC_BLOCK_SIZE=512 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@13 -- # USER=chapo 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@14 -- # MUSER=mchapo 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@15 -- # 
PASS=123456789123 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@16 -- # MPASS=321978654321 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@19 -- # iscsitestinit 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@21 -- # set_up_iscsi_target 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@140 -- # timing_enter start_iscsi_tgt 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@142 -- # pid=67269 00:11:54.690 iSCSI target launched. pid: 67269 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@143 -- # echo 'iSCSI target launched. pid: 67269' 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@144 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@145 -- # waitforlisten 67269 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@141 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@829 -- # '[' -z 67269 ']' 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:54.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
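The `trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT` above leans on autotest_common.sh's killprocess for teardown. A reduced sketch of its core (liveness check, signal, reap); the real helper also special-cases sudo-owned processes and reports the process name via `ps`, both omitted here:

```shell
#!/usr/bin/env bash
# Minimal killprocess: verify the pid is alive, TERM it, reap it.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 1   # not running (or not ours)
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                  # reap; ignore the TERM status
    return 0
}

# Demo against a throwaway sleep instead of the iscsi_tgt pid:
sleep 60 &
pid=$!
killprocess "$pid"
kill -0 "$pid" 2>/dev/null || echo "pid $pid is gone"
```

Running it from the EXIT trap is what guarantees the target does not outlive a failed test.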
00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:54.690 17:16:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:54.690 [2024-07-22 17:16:13.478524] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:54.690 [2024-07-22 17:16:13.478733] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67269 ] 00:11:54.948 [2024-07-22 17:16:13.802737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.206 [2024-07-22 17:16:14.023874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.465 17:16:14 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:55.465 17:16:14 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@862 -- # return 0 00:11:55.465 17:16:14 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@146 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:11:55.465 17:16:14 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.465 17:16:14 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:55.465 17:16:14 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.465 17:16:14 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@147 -- # rpc_cmd framework_start_init 00:11:55.465 17:16:14 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.465 17:16:14 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.400 iscsi_tgt is listening. Running tests... 00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@148 -- # echo 'iscsi_tgt is listening. Running tests...' 00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@149 -- # timing_exit start_iscsi_tgt 00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@151 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@152 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@153 -- # rpc_cmd bdev_malloc_create 64 512 00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:56.400 Malloc0 00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@154 -- # rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d 
00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.400 17:16:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@155 -- # sleep 1 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@156 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:57.335 configuring target for authentication 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@24 -- # echo 'configuring target for authentication' 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@25 -- # config_chap_credentials_for_target -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@84 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MPASS= 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts 
:t:u:s:r:m:dlb opt 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- 
# DURING_LOGIN=1 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 1 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@95 -- # '[' 1 -eq 1 ']' 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@96 -- # '[' 0 -eq 1 ']' 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@99 -- # rpc_cmd iscsi_target_node_set_auth -g 1 -r iqn.2016-06.io.spdk:disk1 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@103 -- # '[' 0 -eq 1 ']' 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@106 -- # rpc_cmd iscsi_set_discovery_auth -r -g 1 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.335 executing discovery without adding credential to initiator - we expect failure 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@26 -- # echo 'executing discovery without adding credential to initiator - we expect failure' 00:11:57.335 configuring initiator with biderectional authentication 00:11:57.335 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@28 -- # echo 'configuring initiator with biderectional authentication' 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@29 -- # config_chap_credentials_for_initiator -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@113 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:11:57.336 17:16:16 
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:11:57.336 17:16:16 
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@114 -- # default_initiator_chap_credentials 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:11:57.336 iscsiadm: No matching sessions found 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # true 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:11:57.336 iscsiadm: No records found 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # true 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' 
/etc/iscsi/iscsid.conf 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@78 -- # restart_iscsid 00:11:57.336 17:16:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:12:00.619 17:16:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:12:00.620 17:16:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:12:01.592 17:16:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:01.592 17:16:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@116 -- # '[' 1 -eq 1 ']' 00:12:01.592 17:16:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@117 -- # sed -i 's/#node.session.auth.authmethod = 
CHAP/node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:12:01.592 17:16:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@118 -- # sed -i 's/#node.session.auth.username =.*/node.session.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:12:01.592 17:16:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@119 -- # sed -i 's/#node.session.auth.password =.*/node.session.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:12:01.592 17:16:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' 1 -eq 1 ']' 00:12:01.592 17:16:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' -n 321978654321 ']' 00:12:01.592 17:16:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' -n mchapo ']' 00:12:01.592 17:16:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@121 -- # sed -i 's/#node.session.auth.username_in =.*/node.session.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:12:01.592 17:16:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@122 -- # sed -i 's/#node.session.auth.password_in =.*/node.session.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:12:01.592 17:16:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@126 -- # '[' 1 -eq 1 ']' 00:12:01.592 17:16:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@127 -- # sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:12:01.592 17:16:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@128 -- # sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:12:01.592 17:16:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@129 -- # sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:12:01.592 17:16:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' 1 -eq 1 ']' 00:12:01.592 17:16:20 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@130 -- # '[' -n 321978654321 ']' 00:12:01.592 17:16:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' -n mchapo ']' 00:12:01.592 17:16:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@131 -- # sed -i 's/#discovery.sendtargets.auth.username_in =.*/discovery.sendtargets.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:12:01.592 17:16:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@132 -- # sed -i 's/#discovery.sendtargets.auth.password_in =.*/discovery.sendtargets.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:12:01.592 17:16:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@135 -- # restart_iscsid 00:12:01.592 17:16:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:12:04.877 17:16:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:12:04.877 17:16:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@136 -- # trap 'trap - ERR; default_initiator_chap_credentials; print_backtrace >&2' ERR 00:12:05.812 executing discovery - target should not be discovered since the -m option was not used 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@30 -- # echo 'executing discovery - target should not be discovered since the -m option was not used' 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@31 -- # rc=0 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@32 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:12:05.812 [2024-07-22 17:16:24.475555] iscsi.c: 982:iscsi_auth_params: *ERROR*: Initiator wants to use mutual CHAP for security, but it's not enabled. 
00:12:05.812 [2024-07-22 17:16:24.475621] iscsi.c:1957:iscsi_op_login_rsp_handle_csg_bit: *ERROR*: iscsi_auth_params() failed 00:12:05.812 iscsiadm: Login failed to authenticate with target 00:12:05.812 iscsiadm: discovery login to 10.0.0.1 rejected: initiator failed authorization 00:12:05.812 iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure 00:12:05.812 configuring target for authentication with the -m option 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@32 -- # rc=24 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@33 -- # '[' 24 -eq 0 ']' 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@37 -- # echo 'configuring target for authentication with the -m option' 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@38 -- # config_chap_credentials_for_target -t 2 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@84 -- # parse_cmd_line -t 2 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- 
# AUTH_GROUP_ID=1 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=2 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@24 -- # case ${opt} in 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 2 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 2 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@95 -- # '[' 1 -eq 1 ']' 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@96 -- # '[' 1 -eq 1 ']' 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@97 -- # rpc_cmd iscsi_target_node_set_auth -g 2 -r -m iqn.2016-06.io.spdk:disk1 00:12:05.812 17:16:24 
iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@103 -- # '[' 1 -eq 1 ']' 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@104 -- # rpc_cmd iscsi_set_discovery_auth -r -m -g 2 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@39 -- # echo 'executing discovery:' 00:12:05.812 executing discovery: 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@40 -- # rc=0 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@41 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:12:05.812 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@42 -- # '[' 0 -ne 0 ']' 00:12:05.812 executing login: 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@46 -- # echo 'executing login:' 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@47 -- # rc=0 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@48 -- # iscsiadm -m node -l -p 10.0.0.1:3260 00:12:05.812 Logging in to [iface: default, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 00:12:05.812 Login to [iface: default, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 
successful. 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@49 -- # '[' 0 -ne 0 ']' 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@54 -- # echo DONE 00:12:05.812 DONE 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@55 -- # default_initiator_chap_credentials 00:12:05.812 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:12:05.813 [2024-07-22 17:16:24.582275] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:05.813 Logging out of session [sid: 5, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 00:12:05.813 Logout of [sid: 5, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] successful. 00:12:05.813 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:12:05.813 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:12:05.813 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:12:05.813 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:12:05.813 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:12:05.813 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:12:05.813 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = 
CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:12:05.813 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:12:05.813 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:12:05.813 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:12:05.813 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:12:05.813 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@78 -- # restart_iscsid 00:12:05.813 17:16:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:12:09.095 17:16:27 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:12:09.095 17:16:27 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:12:10.029 17:16:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:10.029 17:16:28 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@57 -- # trap - SIGINT SIGTERM EXIT 00:12:10.029 17:16:28 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@59 -- # killprocess 67269 00:12:10.029 17:16:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@948 -- # '[' -z 67269 ']' 00:12:10.029 17:16:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@952 -- # kill -0 67269 00:12:10.029 17:16:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@953 -- # uname 00:12:10.029 17:16:28 iscsi_tgt.chap_mutual_auth -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:10.029 17:16:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67269 00:12:10.029 17:16:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:10.029 17:16:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:10.029 killing process with pid 67269 00:12:10.029 17:16:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67269' 00:12:10.029 17:16:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@967 -- # kill 67269 00:12:10.029 17:16:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@972 -- # wait 67269 00:12:12.559 17:16:31 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@61 -- # iscsitestfini 00:12:12.559 17:16:31 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:12:12.559 00:12:12.559 real 0m18.044s 00:12:12.559 user 0m17.866s 00:12:12.559 sys 0m0.870s 00:12:12.559 17:16:31 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:12.559 ************************************ 00:12:12.559 END TEST chap_mutual_auth 00:12:12.559 17:16:31 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:12:12.559 ************************************ 00:12:12.559 17:16:31 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:12:12.559 17:16:31 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@34 -- # run_test iscsi_tgt_reset /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset/reset.sh 00:12:12.559 17:16:31 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:12.560 17:16:31 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:12.560 17:16:31 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:12:12.560 ************************************ 00:12:12.560 START TEST iscsi_tgt_reset 00:12:12.560 
************************************ 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset/reset.sh 00:12:12.560 * Looking for test storage... 00:12:12.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@25 -- # 
INITIATOR_TAG=2 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@11 -- # iscsitestinit 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@16 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@18 -- # hash sg_reset 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@22 -- # timing_enter start_iscsi_tgt 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:12.560 Process pid: 67595 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@25 -- # pid=67595 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@24 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@26 -- # echo 'Process pid: 67595' 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@28 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@30 -- # waitforlisten 67595 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- 
common/autotest_common.sh@829 -- # '[' -z 67595 ']' 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:12.560 17:16:31 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:12.818 [2024-07-22 17:16:31.524391] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:12.818 [2024-07-22 17:16:31.524571] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67595 ] 00:12:12.818 [2024-07-22 17:16:31.688804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.077 [2024-07-22 17:16:31.978879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.643 17:16:32 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:13.643 17:16:32 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@862 -- # return 0 00:12:13.643 17:16:32 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@31 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:12:13.643 17:16:32 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.643 17:16:32 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:13.643 17:16:32 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:12:13.643 17:16:32 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@32 -- # rpc_cmd framework_start_init 00:12:13.643 17:16:32 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.643 17:16:32 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:14.581 iscsi_tgt is listening. Running tests... 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@33 -- # echo 'iscsi_tgt is listening. Running tests...' 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@35 -- # timing_exit start_iscsi_tgt 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@37 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@38 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@39 -- # rpc_cmd bdev_malloc_create 64 512 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # 
set +x 00:12:14.581 Malloc0 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@44 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.581 17:16:33 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@45 -- # sleep 1 00:12:15.517 17:16:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@47 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:12:15.775 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:12:15.775 17:16:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@48 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:12:15.775 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:12:15.775 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:12:15.775 17:16:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@49 -- # waitforiscsidevices 1 00:12:15.775 17:16:34 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@116 -- # local num=1 00:12:15.775 17:16:34 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:12:15.775 17:16:34 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:12:15.775 17:16:34 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:12:15.775 17:16:34 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:12:15.775 [2024-07-22 17:16:34.512788] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:15.775 17:16:34 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # n=1 00:12:15.775 17:16:34 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:12:15.775 17:16:34 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@123 -- # return 0 00:12:15.775 17:16:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # grep 'Attached scsi disk' 00:12:15.775 17:16:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # iscsiadm -m session -P 3 00:12:15.775 17:16:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # awk '{print $4}' 00:12:15.775 17:16:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # dev=sda 00:12:15.775 17:16:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@54 -- # fiopid=67668 00:12:15.775 FIO pid: 67668 00:12:15.775 17:16:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@55 -- # echo 'FIO pid: 67668' 00:12:15.775 17:16:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@57 -- # trap 'iscsicleanup; killprocess $pid; killprocess $fiopid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:12:15.775 17:16:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:12:15.775 17:16:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:12:15.776 17:16:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 60 00:12:15.776 [global] 00:12:15.776 thread=1 00:12:15.776 invalidate=1 00:12:15.776 rw=read 00:12:15.776 time_based=1 00:12:15.776 runtime=60 00:12:15.776 ioengine=libaio 00:12:15.776 direct=1 00:12:15.776 bs=512 00:12:15.776 iodepth=1 00:12:15.776 norandommap=1 00:12:15.776 numjobs=1 00:12:15.776 00:12:15.776 [job0] 00:12:15.776 filename=/dev/sda 00:12:15.776 queue_depth set to 113 (sda) 00:12:15.776 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:12:15.776 fio-3.35 00:12:15.776 Starting 1 thread 00:12:16.711 17:16:35 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 67595 00:12:16.711 17:16:35 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 67668 00:12:16.712 17:16:35 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:12:16.712 [2024-07-22 17:16:35.535450] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:12:16.712 [2024-07-22 17:16:35.535574] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:12:16.712 17:16:35 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:12:16.712 [2024-07-22 17:16:35.537331] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.648 17:16:36 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 67595 00:12:17.648 17:16:36 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 67668 00:12:17.648 17:16:36 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:12:17.648 17:16:36 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:12:19.029 17:16:37 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 67595 00:12:19.029 17:16:37 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 67668 00:12:19.029 17:16:37 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:12:19.029 [2024-07-22 
17:16:37.545224] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:12:19.029 [2024-07-22 17:16:37.545338] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:12:19.029 17:16:37 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:12:19.029 [2024-07-22 17:16:37.547074] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:19.964 17:16:38 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 67595 00:12:19.964 17:16:38 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 67668 00:12:19.964 17:16:38 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:12:19.964 17:16:38 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:12:20.899 17:16:39 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 67595 00:12:20.899 17:16:39 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 67668 00:12:20.899 17:16:39 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:12:20.899 [2024-07-22 17:16:39.556719] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:12:20.899 [2024-07-22 17:16:39.556837] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:12:20.899 17:16:39 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:12:20.899 [2024-07-22 17:16:39.558506] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 67595 00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 67668 00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@70 -- # kill 67668 00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@71 -- # wait 67668 00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@71 -- # true 00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@73 -- # trap - SIGINT SIGTERM EXIT 
00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@75 -- # iscsicleanup 00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:12:21.835 Cleaning up iSCSI connection 00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:12:21.835 fio: io_u error on file /dev/sda: No such device: read offset=27399168, buflen=512 00:12:21.835 fio: pid=67694, err=19/file:io_u.c:1889, func=io_u error, error=No such device 00:12:21.835 00:12:21.835 job0: (groupid=0, jobs=1): err=19 (file:io_u.c:1889, func=io_u error, error=No such device): pid=67694: Mon Jul 22 17:16:40 2024 00:12:21.835 read: IOPS=9316, BW=4658KiB/s (4770kB/s)(26.1MiB/5744msec) 00:12:21.835 slat (usec): min=5, max=1772, avg= 7.54, stdev=10.84 00:12:21.835 clat (usec): min=2, max=1649, avg=98.50, stdev=21.57 00:12:21.835 lat (usec): min=81, max=2088, avg=106.01, stdev=24.29 00:12:21.835 clat percentiles (usec): 00:12:21.835 | 1.00th=[ 82], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 88], 00:12:21.835 | 30.00th=[ 90], 40.00th=[ 92], 50.00th=[ 93], 60.00th=[ 96], 00:12:21.835 | 70.00th=[ 101], 80.00th=[ 108], 90.00th=[ 119], 95.00th=[ 128], 00:12:21.835 | 99.00th=[ 151], 99.50th=[ 165], 99.90th=[ 302], 99.95th=[ 461], 00:12:21.835 | 99.99th=[ 791] 00:12:21.835 bw ( KiB/s): min= 4378, max= 4898, per=100.00%, avg=4661.00, stdev=136.89, samples=11 00:12:21.835 iops : min= 8756, max= 9796, avg=9322.00, stdev=273.79, samples=11 00:12:21.835 lat (usec) : 4=0.01%, 50=0.01%, 100=68.06%, 250=31.79%, 500=0.10% 00:12:21.835 lat (usec) : 750=0.02%, 1000=0.01% 00:12:21.835 lat (msec) : 2=0.01% 00:12:21.835 cpu : usr=4.25%, sys=8.78%, ctx=53580, majf=0, minf=1 00:12:21.835 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:21.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.835 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:12:21.835 issued rwts: total=53515,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.835 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:21.835 00:12:21.835 Run status group 0 (all jobs): 00:12:21.835 READ: bw=4658KiB/s (4770kB/s), 4658KiB/s-4658KiB/s (4770kB/s-4770kB/s), io=26.1MiB (27.4MB), run=5744-5744msec 00:12:21.835 00:12:21.835 Disk stats (read/write): 00:12:21.835 sda: ios=52706/0, merge=0/0, ticks=5056/0, in_queue=5056, util=98.38% 00:12:21.835 Logging out of session [sid: 6, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:12:21.835 Logout of [sid: 6, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@983 -- # rm -rf 00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@76 -- # killprocess 67595 00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@948 -- # '[' -z 67595 ']' 00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@952 -- # kill -0 67595 00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@953 -- # uname 00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67595 00:12:21.835 killing process with pid 67595 00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67595' 00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@967 -- # kill 67595 
00:12:21.835 17:16:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@972 -- # wait 67595 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@77 -- # iscsitestfini 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:12:24.366 00:12:24.366 real 0m11.796s 00:12:24.366 user 0m9.112s 00:12:24.366 sys 0m2.396s 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:24.366 ************************************ 00:12:24.366 END TEST iscsi_tgt_reset 00:12:24.366 ************************************ 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:24.366 17:16:43 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:12:24.366 17:16:43 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@35 -- # run_test iscsi_tgt_rpc_config /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.sh 00:12:24.366 17:16:43 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:24.366 17:16:43 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:24.366 17:16:43 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:12:24.366 ************************************ 00:12:24.366 START TEST iscsi_tgt_rpc_config 00:12:24.366 ************************************ 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.sh 00:12:24.366 * Looking for test storage... 
00:12:24.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:12:24.366 
17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@11 -- # iscsitestinit 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@16 -- # rpc_config_py=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.py 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@18 -- # timing_enter start_iscsi_tgt 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@21 -- # pid=67856 00:12:24.366 Process pid: 67856 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@22 -- # echo 'Process pid: 67856' 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@24 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@26 -- # waitforlisten 67856 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@20 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:12:24.366 17:16:43 
iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@829 -- # '[' -z 67856 ']' 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:24.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:24.366 17:16:43 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:12:24.624 [2024-07-22 17:16:43.412162] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:24.624 [2024-07-22 17:16:43.412365] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67856 ] 00:12:24.882 [2024-07-22 17:16:43.590226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.140 [2024-07-22 17:16:43.897412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.398 17:16:44 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:25.398 17:16:44 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@862 -- # return 0 00:12:25.398 17:16:44 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@28 -- # rpc_wait_pid=67872 00:12:25.398 17:16:44 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:12:25.398 17:16:44 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@29 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 16 00:12:25.657 17:16:44 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@32 -- # ps 67872 00:12:25.657 PID TTY STAT TIME COMMAND 00:12:25.657 67872 ? S 0:00 python3 /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:12:25.657 17:16:44 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:27.037 17:16:45 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@35 -- # sleep 1 00:12:27.973 iscsi_tgt is listening. Running tests... 00:12:27.973 17:16:46 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@36 -- # echo 'iscsi_tgt is listening. Running tests...' 00:12:27.973 17:16:46 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@39 -- # NOT ps 67872 00:12:27.973 17:16:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@648 -- # local es=0 00:12:27.973 17:16:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@650 -- # valid_exec_arg ps 67872 00:12:27.973 17:16:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@636 -- # local arg=ps 00:12:27.973 17:16:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:27.973 17:16:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # type -t ps 00:12:27.973 17:16:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:27.973 17:16:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # type -P ps 00:12:27.973 17:16:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:27.973 17:16:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # arg=/usr/bin/ps 00:12:27.973 17:16:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/ps ]] 00:12:27.973 17:16:46 
iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # ps 67872 00:12:27.973 PID TTY STAT TIME COMMAND 00:12:27.973 17:16:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # es=1 00:12:27.973 17:16:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:27.973 17:16:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:27.973 17:16:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:27.973 17:16:46 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:12:27.973 17:16:46 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@43 -- # rpc_wait_pid=67908 00:12:27.973 17:16:46 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@44 -- # sleep 1 00:12:28.908 17:16:47 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@45 -- # NOT ps 67908 00:12:28.908 17:16:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@648 -- # local es=0 00:12:28.908 17:16:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@650 -- # valid_exec_arg ps 67908 00:12:28.908 17:16:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@636 -- # local arg=ps 00:12:28.908 17:16:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:28.908 17:16:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # type -t ps 00:12:28.908 17:16:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:28.908 17:16:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # type -P ps 00:12:28.908 17:16:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:28.908 17:16:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # arg=/usr/bin/ps 00:12:28.908 17:16:47 
iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/ps ]] 00:12:28.908 17:16:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # ps 67908 00:12:28.908 PID TTY STAT TIME COMMAND 00:12:28.908 17:16:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # es=1 00:12:28.908 17:16:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:28.908 17:16:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:28.908 17:16:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:28.908 17:16:47 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@47 -- # timing_exit start_iscsi_tgt 00:12:28.908 17:16:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:28.908 17:16:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:12:28.908 17:16:47 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@49 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.py /home/vagrant/spdk_repo/spdk/scripts/rpc.py 10.0.0.1 10.0.0.2 3260 10.0.0.2/32 spdk_iscsi_ns 00:13:01.000 [2024-07-22 17:17:15.035387] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:01.000 [2024-07-22 17:17:18.232249] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:01.000 verify_log_flag_rpc_methods passed 00:13:01.000 create_malloc_bdevs_rpc_methods passed 00:13:01.000 verify_portal_groups_rpc_methods passed 00:13:01.000 verify_initiator_groups_rpc_method passed. 00:13:01.000 This issue will be fixed later. 00:13:01.000 verify_target_nodes_rpc_methods passed. 
00:13:01.000 verify_scsi_devices_rpc_methods passed 00:13:01.000 verify_iscsi_connection_rpc_methods passed 00:13:01.000 17:17:19 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:13:01.258 [ 00:13:01.258 { 00:13:01.258 "name": "Malloc0", 00:13:01.258 "aliases": [ 00:13:01.258 "28873b43-ca20-4594-b918-d91746d0bdac" 00:13:01.258 ], 00:13:01.258 "product_name": "Malloc disk", 00:13:01.258 "block_size": 512, 00:13:01.258 "num_blocks": 131072, 00:13:01.258 "uuid": "28873b43-ca20-4594-b918-d91746d0bdac", 00:13:01.258 "assigned_rate_limits": { 00:13:01.258 "rw_ios_per_sec": 0, 00:13:01.258 "rw_mbytes_per_sec": 0, 00:13:01.258 "r_mbytes_per_sec": 0, 00:13:01.258 "w_mbytes_per_sec": 0 00:13:01.258 }, 00:13:01.258 "claimed": false, 00:13:01.258 "zoned": false, 00:13:01.258 "supported_io_types": { 00:13:01.258 "read": true, 00:13:01.258 "write": true, 00:13:01.258 "unmap": true, 00:13:01.258 "flush": true, 00:13:01.258 "reset": true, 00:13:01.258 "nvme_admin": false, 00:13:01.258 "nvme_io": false, 00:13:01.258 "nvme_io_md": false, 00:13:01.258 "write_zeroes": true, 00:13:01.258 "zcopy": true, 00:13:01.258 "get_zone_info": false, 00:13:01.258 "zone_management": false, 00:13:01.258 "zone_append": false, 00:13:01.258 "compare": false, 00:13:01.258 "compare_and_write": false, 00:13:01.258 "abort": true, 00:13:01.258 "seek_hole": false, 00:13:01.258 "seek_data": false, 00:13:01.258 "copy": true, 00:13:01.258 "nvme_iov_md": false 00:13:01.258 }, 00:13:01.258 "memory_domains": [ 00:13:01.258 { 00:13:01.258 "dma_device_id": "system", 00:13:01.258 "dma_device_type": 1 00:13:01.258 }, 00:13:01.258 { 00:13:01.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.258 "dma_device_type": 2 00:13:01.258 } 00:13:01.258 ], 00:13:01.258 "driver_specific": {} 00:13:01.258 }, 00:13:01.258 { 00:13:01.259 "name": "Malloc1", 00:13:01.259 "aliases": [ 00:13:01.259 "544b881c-677d-45ed-8b80-de906f24f7e6" 00:13:01.259 ], 
00:13:01.259 "product_name": "Malloc disk", 00:13:01.259 "block_size": 512, 00:13:01.259 "num_blocks": 131072, 00:13:01.259 "uuid": "544b881c-677d-45ed-8b80-de906f24f7e6", 00:13:01.259 "assigned_rate_limits": { 00:13:01.259 "rw_ios_per_sec": 0, 00:13:01.259 "rw_mbytes_per_sec": 0, 00:13:01.259 "r_mbytes_per_sec": 0, 00:13:01.259 "w_mbytes_per_sec": 0 00:13:01.259 }, 00:13:01.259 "claimed": false, 00:13:01.259 "zoned": false, 00:13:01.259 "supported_io_types": { 00:13:01.259 "read": true, 00:13:01.259 "write": true, 00:13:01.259 "unmap": true, 00:13:01.259 "flush": true, 00:13:01.259 "reset": true, 00:13:01.259 "nvme_admin": false, 00:13:01.259 "nvme_io": false, 00:13:01.259 "nvme_io_md": false, 00:13:01.259 "write_zeroes": true, 00:13:01.259 "zcopy": true, 00:13:01.259 "get_zone_info": false, 00:13:01.259 "zone_management": false, 00:13:01.259 "zone_append": false, 00:13:01.259 "compare": false, 00:13:01.259 "compare_and_write": false, 00:13:01.259 "abort": true, 00:13:01.259 "seek_hole": false, 00:13:01.259 "seek_data": false, 00:13:01.259 "copy": true, 00:13:01.259 "nvme_iov_md": false 00:13:01.259 }, 00:13:01.259 "memory_domains": [ 00:13:01.259 { 00:13:01.259 "dma_device_id": "system", 00:13:01.259 "dma_device_type": 1 00:13:01.259 }, 00:13:01.259 { 00:13:01.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.259 "dma_device_type": 2 00:13:01.259 } 00:13:01.259 ], 00:13:01.259 "driver_specific": {} 00:13:01.259 }, 00:13:01.259 { 00:13:01.259 "name": "Malloc2", 00:13:01.259 "aliases": [ 00:13:01.259 "f56f84a6-1486-436b-8c3a-f624b643c80e" 00:13:01.259 ], 00:13:01.259 "product_name": "Malloc disk", 00:13:01.259 "block_size": 512, 00:13:01.259 "num_blocks": 131072, 00:13:01.259 "uuid": "f56f84a6-1486-436b-8c3a-f624b643c80e", 00:13:01.259 "assigned_rate_limits": { 00:13:01.259 "rw_ios_per_sec": 0, 00:13:01.259 "rw_mbytes_per_sec": 0, 00:13:01.259 "r_mbytes_per_sec": 0, 00:13:01.259 "w_mbytes_per_sec": 0 00:13:01.259 }, 00:13:01.259 "claimed": false, 00:13:01.259 
"zoned": false, 00:13:01.259 "supported_io_types": { 00:13:01.259 "read": true, 00:13:01.259 "write": true, 00:13:01.259 "unmap": true, 00:13:01.259 "flush": true, 00:13:01.259 "reset": true, 00:13:01.259 "nvme_admin": false, 00:13:01.259 "nvme_io": false, 00:13:01.259 "nvme_io_md": false, 00:13:01.259 "write_zeroes": true, 00:13:01.259 "zcopy": true, 00:13:01.259 "get_zone_info": false, 00:13:01.259 "zone_management": false, 00:13:01.259 "zone_append": false, 00:13:01.259 "compare": false, 00:13:01.259 "compare_and_write": false, 00:13:01.259 "abort": true, 00:13:01.259 "seek_hole": false, 00:13:01.259 "seek_data": false, 00:13:01.259 "copy": true, 00:13:01.259 "nvme_iov_md": false 00:13:01.259 }, 00:13:01.259 "memory_domains": [ 00:13:01.259 { 00:13:01.259 "dma_device_id": "system", 00:13:01.259 "dma_device_type": 1 00:13:01.259 }, 00:13:01.259 { 00:13:01.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.259 "dma_device_type": 2 00:13:01.259 } 00:13:01.259 ], 00:13:01.259 "driver_specific": {} 00:13:01.259 }, 00:13:01.259 { 00:13:01.259 "name": "Malloc3", 00:13:01.259 "aliases": [ 00:13:01.259 "5dcd3cbf-a162-4d48-a87d-783711af1fef" 00:13:01.259 ], 00:13:01.259 "product_name": "Malloc disk", 00:13:01.259 "block_size": 512, 00:13:01.259 "num_blocks": 131072, 00:13:01.259 "uuid": "5dcd3cbf-a162-4d48-a87d-783711af1fef", 00:13:01.259 "assigned_rate_limits": { 00:13:01.259 "rw_ios_per_sec": 0, 00:13:01.259 "rw_mbytes_per_sec": 0, 00:13:01.259 "r_mbytes_per_sec": 0, 00:13:01.259 "w_mbytes_per_sec": 0 00:13:01.259 }, 00:13:01.259 "claimed": false, 00:13:01.259 "zoned": false, 00:13:01.259 "supported_io_types": { 00:13:01.259 "read": true, 00:13:01.259 "write": true, 00:13:01.259 "unmap": true, 00:13:01.259 "flush": true, 00:13:01.259 "reset": true, 00:13:01.259 "nvme_admin": false, 00:13:01.259 "nvme_io": false, 00:13:01.259 "nvme_io_md": false, 00:13:01.259 "write_zeroes": true, 00:13:01.259 "zcopy": true, 00:13:01.259 "get_zone_info": false, 00:13:01.259 
"zone_management": false, 00:13:01.259 "zone_append": false, 00:13:01.259 "compare": false, 00:13:01.259 "compare_and_write": false, 00:13:01.259 "abort": true, 00:13:01.259 "seek_hole": false, 00:13:01.259 "seek_data": false, 00:13:01.259 "copy": true, 00:13:01.259 "nvme_iov_md": false 00:13:01.259 }, 00:13:01.259 "memory_domains": [ 00:13:01.259 { 00:13:01.259 "dma_device_id": "system", 00:13:01.259 "dma_device_type": 1 00:13:01.259 }, 00:13:01.259 { 00:13:01.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.259 "dma_device_type": 2 00:13:01.259 } 00:13:01.259 ], 00:13:01.259 "driver_specific": {} 00:13:01.259 }, 00:13:01.259 { 00:13:01.259 "name": "Malloc4", 00:13:01.259 "aliases": [ 00:13:01.259 "05405ba8-1307-4697-99a3-4624cc0d4bec" 00:13:01.259 ], 00:13:01.259 "product_name": "Malloc disk", 00:13:01.259 "block_size": 512, 00:13:01.259 "num_blocks": 131072, 00:13:01.259 "uuid": "05405ba8-1307-4697-99a3-4624cc0d4bec", 00:13:01.259 "assigned_rate_limits": { 00:13:01.259 "rw_ios_per_sec": 0, 00:13:01.259 "rw_mbytes_per_sec": 0, 00:13:01.259 "r_mbytes_per_sec": 0, 00:13:01.259 "w_mbytes_per_sec": 0 00:13:01.259 }, 00:13:01.259 "claimed": false, 00:13:01.259 "zoned": false, 00:13:01.259 "supported_io_types": { 00:13:01.259 "read": true, 00:13:01.259 "write": true, 00:13:01.259 "unmap": true, 00:13:01.259 "flush": true, 00:13:01.259 "reset": true, 00:13:01.259 "nvme_admin": false, 00:13:01.259 "nvme_io": false, 00:13:01.259 "nvme_io_md": false, 00:13:01.259 "write_zeroes": true, 00:13:01.259 "zcopy": true, 00:13:01.259 "get_zone_info": false, 00:13:01.259 "zone_management": false, 00:13:01.259 "zone_append": false, 00:13:01.259 "compare": false, 00:13:01.259 "compare_and_write": false, 00:13:01.259 "abort": true, 00:13:01.259 "seek_hole": false, 00:13:01.259 "seek_data": false, 00:13:01.259 "copy": true, 00:13:01.259 "nvme_iov_md": false 00:13:01.259 }, 00:13:01.259 "memory_domains": [ 00:13:01.259 { 00:13:01.259 "dma_device_id": "system", 00:13:01.259 
"dma_device_type": 1 00:13:01.259 }, 00:13:01.259 { 00:13:01.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.259 "dma_device_type": 2 00:13:01.259 } 00:13:01.259 ], 00:13:01.259 "driver_specific": {} 00:13:01.259 }, 00:13:01.259 { 00:13:01.259 "name": "Malloc5", 00:13:01.259 "aliases": [ 00:13:01.259 "dcae4987-4bc2-473d-b336-01a7c35842fd" 00:13:01.259 ], 00:13:01.259 "product_name": "Malloc disk", 00:13:01.259 "block_size": 512, 00:13:01.259 "num_blocks": 131072, 00:13:01.259 "uuid": "dcae4987-4bc2-473d-b336-01a7c35842fd", 00:13:01.259 "assigned_rate_limits": { 00:13:01.259 "rw_ios_per_sec": 0, 00:13:01.259 "rw_mbytes_per_sec": 0, 00:13:01.259 "r_mbytes_per_sec": 0, 00:13:01.259 "w_mbytes_per_sec": 0 00:13:01.259 }, 00:13:01.259 "claimed": false, 00:13:01.259 "zoned": false, 00:13:01.259 "supported_io_types": { 00:13:01.259 "read": true, 00:13:01.259 "write": true, 00:13:01.259 "unmap": true, 00:13:01.259 "flush": true, 00:13:01.259 "reset": true, 00:13:01.259 "nvme_admin": false, 00:13:01.259 "nvme_io": false, 00:13:01.259 "nvme_io_md": false, 00:13:01.259 "write_zeroes": true, 00:13:01.259 "zcopy": true, 00:13:01.259 "get_zone_info": false, 00:13:01.259 "zone_management": false, 00:13:01.259 "zone_append": false, 00:13:01.259 "compare": false, 00:13:01.259 "compare_and_write": false, 00:13:01.259 "abort": true, 00:13:01.259 "seek_hole": false, 00:13:01.259 "seek_data": false, 00:13:01.259 "copy": true, 00:13:01.259 "nvme_iov_md": false 00:13:01.260 }, 00:13:01.260 "memory_domains": [ 00:13:01.260 { 00:13:01.260 "dma_device_id": "system", 00:13:01.260 "dma_device_type": 1 00:13:01.260 }, 00:13:01.260 { 00:13:01.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.260 "dma_device_type": 2 00:13:01.260 } 00:13:01.260 ], 00:13:01.260 "driver_specific": {} 00:13:01.260 } 00:13:01.260 ] 00:13:01.260 17:17:20 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@53 -- # trap - SIGINT SIGTERM EXIT 00:13:01.260 17:17:20 iscsi_tgt.iscsi_tgt_rpc_config -- 
rpc_config/rpc_config.sh@55 -- # iscsicleanup 00:13:01.260 Cleaning up iSCSI connection 00:13:01.260 17:17:20 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:13:01.260 17:17:20 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:13:01.260 iscsiadm: No matching sessions found 00:13:01.260 17:17:20 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@981 -- # true 00:13:01.260 17:17:20 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:13:01.260 iscsiadm: No records found 00:13:01.260 17:17:20 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@982 -- # true 00:13:01.260 17:17:20 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@983 -- # rm -rf 00:13:01.260 17:17:20 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@56 -- # killprocess 67856 00:13:01.260 17:17:20 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@948 -- # '[' -z 67856 ']' 00:13:01.260 17:17:20 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@952 -- # kill -0 67856 00:13:01.260 17:17:20 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@953 -- # uname 00:13:01.260 17:17:20 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:01.260 17:17:20 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67856 00:13:01.260 17:17:20 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:01.260 17:17:20 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:01.260 killing process with pid 67856 00:13:01.260 17:17:20 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67856' 00:13:01.260 17:17:20 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@967 -- # kill 67856 00:13:01.260 
17:17:20 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@972 -- # wait 67856 00:13:04.543 17:17:23 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@58 -- # iscsitestfini 00:13:04.543 17:17:23 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:13:04.543 ************************************ 00:13:04.543 END TEST iscsi_tgt_rpc_config 00:13:04.543 ************************************ 00:13:04.543 00:13:04.543 real 0m40.293s 00:13:04.543 user 1m7.344s 00:13:04.543 sys 0m5.058s 00:13:04.543 17:17:23 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:04.543 17:17:23 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:13:04.802 17:17:23 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:13:04.802 17:17:23 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@36 -- # run_test iscsi_tgt_iscsi_lvol /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol/iscsi_lvol.sh 00:13:04.802 17:17:23 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:04.802 17:17:23 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:04.802 17:17:23 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:13:04.802 ************************************ 00:13:04.802 START TEST iscsi_tgt_iscsi_lvol 00:13:04.802 ************************************ 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol/iscsi_lvol.sh 00:13:04.802 * Looking for test storage... 
00:13:04.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:13:04.802 17:17:23 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@11 -- # iscsitestinit 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@13 -- # MALLOC_BDEV_SIZE=128 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@15 -- # '[' 1 -eq 1 ']' 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@16 -- # NUM_LVS=10 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@17 -- # NUM_LVOL=10 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@23 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@24 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@26 -- # timing_enter start_iscsi_tgt 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@29 -- # pid=68524 00:13:04.802 Process pid: 68524 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@30 -- # echo 'Process pid: 68524' 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@32 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM 
EXIT 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@28 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@34 -- # waitforlisten 68524 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@829 -- # '[' -z 68524 ']' 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:04.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:04.802 17:17:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:05.060 [2024-07-22 17:17:23.754613] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:13:05.061 [2024-07-22 17:17:23.754826] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68524 ] 00:13:05.061 [2024-07-22 17:17:23.932409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:05.319 [2024-07-22 17:17:24.197039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.319 [2024-07-22 17:17:24.197178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.319 [2024-07-22 17:17:24.198449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.319 [2024-07-22 17:17:24.198463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.885 17:17:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:05.885 17:17:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@862 -- # return 0 00:13:05.885 17:17:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 16 00:13:06.143 17:17:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:07.075 17:17:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@37 -- # echo 'iscsi_tgt is listening. Running tests...' 00:13:07.075 iscsi_tgt is listening. Running tests... 
00:13:07.075 17:17:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@39 -- # timing_exit start_iscsi_tgt 00:13:07.075 17:17:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:07.075 17:17:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:07.075 17:17:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@41 -- # timing_enter setup 00:13:07.075 17:17:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:07.075 17:17:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:07.075 17:17:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:13:07.332 17:17:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # seq 1 10 00:13:07.332 17:17:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:07.332 17:17:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=3 00:13:07.332 17:17:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 3 ANY 10.0.0.2/32 00:13:07.652 17:17:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 1 -eq 1 ']' 00:13:07.652 17:17:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:08.245 17:17:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@50 -- # malloc_bdevs='Malloc0 ' 00:13:08.245 17:17:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:08.503 17:17:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@51 -- # malloc_bdevs+=Malloc1 00:13:08.503 17:17:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@52 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:08.760 17:17:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@53 -- # bdev=raid0 00:13:08.760 17:17:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs_1 -c 1048576 00:13:09.017 17:17:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=5c70d858-fb34-4942-b744-40cfb2ace95a 00:13:09.017 17:17:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:09.017 17:17:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:09.017 17:17:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:09.017 17:17:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5c70d858-fb34-4942-b744-40cfb2ace95a lbd_1 10 00:13:09.274 17:17:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=95698f9e-c51a-40d2-bb91-63e22b51c72e 00:13:09.274 17:17:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='95698f9e-c51a-40d2-bb91-63e22b51c72e:0 ' 00:13:09.274 17:17:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:09.274 17:17:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5c70d858-fb34-4942-b744-40cfb2ace95a lbd_2 10 00:13:09.530 17:17:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=094d8465-477b-4a07-9633-b988118647fd 00:13:09.530 17:17:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='094d8465-477b-4a07-9633-b988118647fd:1 ' 00:13:09.530 17:17:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:09.530 17:17:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5c70d858-fb34-4942-b744-40cfb2ace95a lbd_3 10 00:13:09.788 17:17:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=15a72d0b-40b5-4603-9866-c2f3766e71e1 00:13:09.788 17:17:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='15a72d0b-40b5-4603-9866-c2f3766e71e1:2 ' 00:13:09.788 17:17:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:09.788 17:17:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5c70d858-fb34-4942-b744-40cfb2ace95a lbd_4 10 00:13:10.353 17:17:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=1a31e541-4991-46fa-99f9-696153e50837 00:13:10.353 17:17:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='1a31e541-4991-46fa-99f9-696153e50837:3 ' 00:13:10.353 17:17:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:10.353 17:17:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5c70d858-fb34-4942-b744-40cfb2ace95a lbd_5 10 00:13:10.353 17:17:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ad43d395-e9b7-4650-ad8a-388dfbcc0047 00:13:10.353 17:17:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ad43d395-e9b7-4650-ad8a-388dfbcc0047:4 ' 00:13:10.353 17:17:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:10.353 17:17:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5c70d858-fb34-4942-b744-40cfb2ace95a lbd_6 10 00:13:10.917 17:17:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ef0fd059-eace-4e75-ab51-d6cf5f41ecff 00:13:10.917 17:17:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@62 -- # LUNs+='ef0fd059-eace-4e75-ab51-d6cf5f41ecff:5 ' 00:13:10.917 17:17:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:10.917 17:17:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5c70d858-fb34-4942-b744-40cfb2ace95a lbd_7 10 00:13:10.917 17:17:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c94e7ae7-d4f8-4277-a326-cbc58a891015 00:13:10.917 17:17:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c94e7ae7-d4f8-4277-a326-cbc58a891015:6 ' 00:13:10.917 17:17:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:10.917 17:17:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5c70d858-fb34-4942-b744-40cfb2ace95a lbd_8 10 00:13:11.175 17:17:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e880e03d-6e31-4def-8de2-1023b9defc1e 00:13:11.175 17:17:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e880e03d-6e31-4def-8de2-1023b9defc1e:7 ' 00:13:11.175 17:17:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:11.175 17:17:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5c70d858-fb34-4942-b744-40cfb2ace95a lbd_9 10 00:13:11.434 17:17:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=41f87e42-918a-4e3f-8cb7-b7db4591940d 00:13:11.434 17:17:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='41f87e42-918a-4e3f-8cb7-b7db4591940d:8 ' 00:13:11.434 17:17:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:11.434 17:17:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5c70d858-fb34-4942-b744-40cfb2ace95a lbd_10 10 00:13:11.692 17:17:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=050ed329-08b7-45ce-9fd5-c183f190f3b6 00:13:11.692 17:17:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='050ed329-08b7-45ce-9fd5-c183f190f3b6:9 ' 00:13:11.692 17:17:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias '95698f9e-c51a-40d2-bb91-63e22b51c72e:0 094d8465-477b-4a07-9633-b988118647fd:1 15a72d0b-40b5-4603-9866-c2f3766e71e1:2 1a31e541-4991-46fa-99f9-696153e50837:3 ad43d395-e9b7-4650-ad8a-388dfbcc0047:4 ef0fd059-eace-4e75-ab51-d6cf5f41ecff:5 c94e7ae7-d4f8-4277-a326-cbc58a891015:6 e880e03d-6e31-4def-8de2-1023b9defc1e:7 41f87e42-918a-4e3f-8cb7-b7db4591940d:8 050ed329-08b7-45ce-9fd5-c183f190f3b6:9 ' 1:3 256 -d 00:13:12.259 17:17:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:12.259 17:17:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=4 00:13:12.259 17:17:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 4 ANY 10.0.0.2/32 00:13:12.259 17:17:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 2 -eq 1 ']' 00:13:12.259 17:17:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:12.827 17:17:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc2 00:13:12.827 17:17:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc2 lvs_2 -c 1048576 00:13:12.827 17:17:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=d3ea0111-fc22-4cb8-84d3-591305102551 00:13:12.827 
17:17:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:12.827 17:17:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:12.827 17:17:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:12.827 17:17:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d3ea0111-fc22-4cb8-84d3-591305102551 lbd_1 10 00:13:13.085 17:17:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=5773fe84-ecfa-43e3-bc3f-e19d8e1f0b3d 00:13:13.085 17:17:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='5773fe84-ecfa-43e3-bc3f-e19d8e1f0b3d:0 ' 00:13:13.085 17:17:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:13.085 17:17:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d3ea0111-fc22-4cb8-84d3-591305102551 lbd_2 10 00:13:13.344 17:17:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=bd28e281-4878-4437-94fb-62f3d36c0d62 00:13:13.344 17:17:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='bd28e281-4878-4437-94fb-62f3d36c0d62:1 ' 00:13:13.344 17:17:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:13.344 17:17:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d3ea0111-fc22-4cb8-84d3-591305102551 lbd_3 10 00:13:13.602 17:17:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0f56ee2a-3fd4-4599-85da-c8ea295bbcef 00:13:13.603 17:17:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0f56ee2a-3fd4-4599-85da-c8ea295bbcef:2 ' 00:13:13.603 17:17:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:13.603 17:17:32 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d3ea0111-fc22-4cb8-84d3-591305102551 lbd_4 10 00:13:13.903 17:17:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=18093084-6f69-4f00-9ec7-3bc9f82358b3 00:13:13.903 17:17:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='18093084-6f69-4f00-9ec7-3bc9f82358b3:3 ' 00:13:13.903 17:17:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:13.903 17:17:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d3ea0111-fc22-4cb8-84d3-591305102551 lbd_5 10 00:13:14.162 17:17:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d827dacb-272d-4f40-ae72-f0a073ca124f 00:13:14.162 17:17:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d827dacb-272d-4f40-ae72-f0a073ca124f:4 ' 00:13:14.162 17:17:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:14.162 17:17:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d3ea0111-fc22-4cb8-84d3-591305102551 lbd_6 10 00:13:14.421 17:17:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=920a68b0-22d2-4740-949d-6de557cadb66 00:13:14.421 17:17:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='920a68b0-22d2-4740-949d-6de557cadb66:5 ' 00:13:14.421 17:17:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:14.421 17:17:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d3ea0111-fc22-4cb8-84d3-591305102551 lbd_7 10 00:13:14.680 17:17:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0de442fd-86be-441b-8032-dc3815ba510c 
00:13:14.680 17:17:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0de442fd-86be-441b-8032-dc3815ba510c:6 ' 00:13:14.680 17:17:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:14.680 17:17:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d3ea0111-fc22-4cb8-84d3-591305102551 lbd_8 10 00:13:14.939 17:17:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e56a60b6-425d-4ff5-bdaa-706b64d747d1 00:13:14.939 17:17:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e56a60b6-425d-4ff5-bdaa-706b64d747d1:7 ' 00:13:14.939 17:17:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:14.939 17:17:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d3ea0111-fc22-4cb8-84d3-591305102551 lbd_9 10 00:13:15.198 17:17:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=fa5391c1-1bca-4e45-a4a7-dc1212d95def 00:13:15.198 17:17:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='fa5391c1-1bca-4e45-a4a7-dc1212d95def:8 ' 00:13:15.198 17:17:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:15.198 17:17:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d3ea0111-fc22-4cb8-84d3-591305102551 lbd_10 10 00:13:15.457 17:17:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=38678368-53dc-4080-b2e3-fec964508169 00:13:15.457 17:17:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='38678368-53dc-4080-b2e3-fec964508169:9 ' 00:13:15.457 17:17:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target2 Target2_alias 
'5773fe84-ecfa-43e3-bc3f-e19d8e1f0b3d:0 bd28e281-4878-4437-94fb-62f3d36c0d62:1 0f56ee2a-3fd4-4599-85da-c8ea295bbcef:2 18093084-6f69-4f00-9ec7-3bc9f82358b3:3 d827dacb-272d-4f40-ae72-f0a073ca124f:4 920a68b0-22d2-4740-949d-6de557cadb66:5 0de442fd-86be-441b-8032-dc3815ba510c:6 e56a60b6-425d-4ff5-bdaa-706b64d747d1:7 fa5391c1-1bca-4e45-a4a7-dc1212d95def:8 38678368-53dc-4080-b2e3-fec964508169:9 ' 1:4 256 -d 00:13:15.716 17:17:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:15.716 17:17:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=5 00:13:15.716 17:17:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 5 ANY 10.0.0.2/32 00:13:15.975 17:17:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 3 -eq 1 ']' 00:13:15.975 17:17:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:16.234 17:17:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc3 00:13:16.234 17:17:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc3 lvs_3 -c 1048576 00:13:16.492 17:17:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=442b44d6-0ac1-49fa-9832-90d9774fb985 00:13:16.492 17:17:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:16.492 17:17:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:16.492 17:17:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:16.492 17:17:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 442b44d6-0ac1-49fa-9832-90d9774fb985 lbd_1 10 00:13:16.752 17:17:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@61 -- # lb_name=1f15412f-c8e2-4359-9cec-74276fc69b95 00:13:16.752 17:17:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='1f15412f-c8e2-4359-9cec-74276fc69b95:0 ' 00:13:16.752 17:17:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:16.752 17:17:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 442b44d6-0ac1-49fa-9832-90d9774fb985 lbd_2 10 00:13:17.011 17:17:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=808d5a5e-6935-4d81-9cd3-d914d3cf5fbb 00:13:17.011 17:17:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='808d5a5e-6935-4d81-9cd3-d914d3cf5fbb:1 ' 00:13:17.011 17:17:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:17.011 17:17:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 442b44d6-0ac1-49fa-9832-90d9774fb985 lbd_3 10 00:13:17.269 17:17:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=1f0f5f24-943d-4cf6-ab91-d56f5a50288f 00:13:17.269 17:17:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='1f0f5f24-943d-4cf6-ab91-d56f5a50288f:2 ' 00:13:17.269 17:17:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:17.269 17:17:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 442b44d6-0ac1-49fa-9832-90d9774fb985 lbd_4 10 00:13:17.527 17:17:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ce51d5f1-3ab3-4bfe-b937-8028213c0799 00:13:17.527 17:17:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ce51d5f1-3ab3-4bfe-b937-8028213c0799:3 ' 00:13:17.527 17:17:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 
$NUM_LVOL) 00:13:17.527 17:17:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 442b44d6-0ac1-49fa-9832-90d9774fb985 lbd_5 10 00:13:17.786 17:17:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e1017341-2cb0-4d5c-ae0c-18036fa733d1 00:13:17.786 17:17:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e1017341-2cb0-4d5c-ae0c-18036fa733d1:4 ' 00:13:17.786 17:17:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:17.786 17:17:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 442b44d6-0ac1-49fa-9832-90d9774fb985 lbd_6 10 00:13:18.044 17:17:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f7e7a10f-8238-4d2f-97c7-5fd0576b9ee8 00:13:18.044 17:17:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f7e7a10f-8238-4d2f-97c7-5fd0576b9ee8:5 ' 00:13:18.044 17:17:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:18.044 17:17:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 442b44d6-0ac1-49fa-9832-90d9774fb985 lbd_7 10 00:13:18.303 17:17:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f1bb6632-3f54-4b11-8dd8-5703451732ef 00:13:18.303 17:17:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f1bb6632-3f54-4b11-8dd8-5703451732ef:6 ' 00:13:18.303 17:17:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:18.303 17:17:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 442b44d6-0ac1-49fa-9832-90d9774fb985 lbd_8 10 00:13:18.562 17:17:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
lb_name=17724517-31fa-4ebf-8cfd-2c58a96b5f19 00:13:18.562 17:17:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='17724517-31fa-4ebf-8cfd-2c58a96b5f19:7 ' 00:13:18.562 17:17:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:18.562 17:17:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 442b44d6-0ac1-49fa-9832-90d9774fb985 lbd_9 10 00:13:18.820 17:17:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=9e358dba-1c3d-4e79-9141-dfae681faf01 00:13:18.820 17:17:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='9e358dba-1c3d-4e79-9141-dfae681faf01:8 ' 00:13:18.820 17:17:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:18.820 17:17:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 442b44d6-0ac1-49fa-9832-90d9774fb985 lbd_10 10 00:13:19.078 17:17:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=445c6fad-5876-4721-8894-6530900db39b 00:13:19.078 17:17:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='445c6fad-5876-4721-8894-6530900db39b:9 ' 00:13:19.078 17:17:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias '1f15412f-c8e2-4359-9cec-74276fc69b95:0 808d5a5e-6935-4d81-9cd3-d914d3cf5fbb:1 1f0f5f24-943d-4cf6-ab91-d56f5a50288f:2 ce51d5f1-3ab3-4bfe-b937-8028213c0799:3 e1017341-2cb0-4d5c-ae0c-18036fa733d1:4 f7e7a10f-8238-4d2f-97c7-5fd0576b9ee8:5 f1bb6632-3f54-4b11-8dd8-5703451732ef:6 17724517-31fa-4ebf-8cfd-2c58a96b5f19:7 9e358dba-1c3d-4e79-9141-dfae681faf01:8 445c6fad-5876-4721-8894-6530900db39b:9 ' 1:5 256 -d 00:13:19.078 17:17:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 
00:13:19.078 17:17:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=6 00:13:19.078 17:17:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 6 ANY 10.0.0.2/32 00:13:19.337 17:17:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 4 -eq 1 ']' 00:13:19.337 17:17:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:19.944 17:17:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc4 00:13:19.944 17:17:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc4 lvs_4 -c 1048576 00:13:20.203 17:17:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=3b7ed3fa-cab0-462c-bc50-49d2d291b7e9 00:13:20.204 17:17:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:20.204 17:17:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:20.204 17:17:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:20.204 17:17:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3b7ed3fa-cab0-462c-bc50-49d2d291b7e9 lbd_1 10 00:13:20.461 17:17:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=3e9c34af-82db-47ec-9af7-431528f7fba9 00:13:20.461 17:17:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3e9c34af-82db-47ec-9af7-431528f7fba9:0 ' 00:13:20.461 17:17:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:20.461 17:17:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3b7ed3fa-cab0-462c-bc50-49d2d291b7e9 lbd_2 10 00:13:20.719 17:17:39 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=5c78f3fe-1d17-4500-8542-22782ef0c0a9 00:13:20.719 17:17:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='5c78f3fe-1d17-4500-8542-22782ef0c0a9:1 ' 00:13:20.719 17:17:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:20.719 17:17:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3b7ed3fa-cab0-462c-bc50-49d2d291b7e9 lbd_3 10 00:13:20.977 17:17:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=1eda6586-3441-41a1-8a54-24e06e3e9c7c 00:13:20.977 17:17:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='1eda6586-3441-41a1-8a54-24e06e3e9c7c:2 ' 00:13:20.977 17:17:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:20.977 17:17:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3b7ed3fa-cab0-462c-bc50-49d2d291b7e9 lbd_4 10 00:13:21.236 17:17:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0d01d334-29b1-493a-a131-9682f61ecc78 00:13:21.236 17:17:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0d01d334-29b1-493a-a131-9682f61ecc78:3 ' 00:13:21.236 17:17:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:21.236 17:17:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3b7ed3fa-cab0-462c-bc50-49d2d291b7e9 lbd_5 10 00:13:21.236 17:17:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=1a24c122-d477-4e90-84eb-c9bbcc4551f9 00:13:21.236 17:17:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='1a24c122-d477-4e90-84eb-c9bbcc4551f9:4 ' 00:13:21.236 17:17:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:21.236 17:17:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3b7ed3fa-cab0-462c-bc50-49d2d291b7e9 lbd_6 10 00:13:21.494 17:17:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=decc16cf-a556-492d-a367-993962015766 00:13:21.494 17:17:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='decc16cf-a556-492d-a367-993962015766:5 ' 00:13:21.494 17:17:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:21.494 17:17:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3b7ed3fa-cab0-462c-bc50-49d2d291b7e9 lbd_7 10 00:13:21.752 17:17:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c014b000-d97d-4a2d-b32a-25e4cd51460b 00:13:21.752 17:17:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c014b000-d97d-4a2d-b32a-25e4cd51460b:6 ' 00:13:21.752 17:17:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:21.752 17:17:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3b7ed3fa-cab0-462c-bc50-49d2d291b7e9 lbd_8 10 00:13:22.318 17:17:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=41e8bdf6-1342-4bf6-853b-71aa99b491cb 00:13:22.318 17:17:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='41e8bdf6-1342-4bf6-853b-71aa99b491cb:7 ' 00:13:22.318 17:17:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:22.318 17:17:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3b7ed3fa-cab0-462c-bc50-49d2d291b7e9 lbd_9 10 00:13:22.318 17:17:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@61 -- # lb_name=4df57595-1ccd-4952-894e-60600db53f1c 00:13:22.318 17:17:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='4df57595-1ccd-4952-894e-60600db53f1c:8 ' 00:13:22.319 17:17:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:22.319 17:17:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3b7ed3fa-cab0-462c-bc50-49d2d291b7e9 lbd_10 10 00:13:22.577 17:17:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=eb643634-f073-4918-a159-51c7830be7c4 00:13:22.577 17:17:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='eb643634-f073-4918-a159-51c7830be7c4:9 ' 00:13:22.577 17:17:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target4 Target4_alias '3e9c34af-82db-47ec-9af7-431528f7fba9:0 5c78f3fe-1d17-4500-8542-22782ef0c0a9:1 1eda6586-3441-41a1-8a54-24e06e3e9c7c:2 0d01d334-29b1-493a-a131-9682f61ecc78:3 1a24c122-d477-4e90-84eb-c9bbcc4551f9:4 decc16cf-a556-492d-a367-993962015766:5 c014b000-d97d-4a2d-b32a-25e4cd51460b:6 41e8bdf6-1342-4bf6-853b-71aa99b491cb:7 4df57595-1ccd-4952-894e-60600db53f1c:8 eb643634-f073-4918-a159-51c7830be7c4:9 ' 1:6 256 -d 00:13:22.835 17:17:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:22.835 17:17:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=7 00:13:22.835 17:17:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 7 ANY 10.0.0.2/32 00:13:23.093 17:17:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 5 -eq 1 ']' 00:13:23.093 17:17:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:23.351 
17:17:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc5 00:13:23.351 17:17:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc5 lvs_5 -c 1048576 00:13:23.608 17:17:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=3c753a23-9cb8-4a8a-971f-a9398d3c46c4 00:13:23.608 17:17:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:23.608 17:17:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:23.866 17:17:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:23.866 17:17:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3c753a23-9cb8-4a8a-971f-a9398d3c46c4 lbd_1 10 00:13:23.866 17:17:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b6fdf74d-e8e9-4f52-97dc-81855ada40b9 00:13:23.866 17:17:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b6fdf74d-e8e9-4f52-97dc-81855ada40b9:0 ' 00:13:23.866 17:17:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:23.866 17:17:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3c753a23-9cb8-4a8a-971f-a9398d3c46c4 lbd_2 10 00:13:24.124 17:17:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=1a03e69c-07e2-4553-97bb-c32b8bd9a00f 00:13:24.124 17:17:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='1a03e69c-07e2-4553-97bb-c32b8bd9a00f:1 ' 00:13:24.124 17:17:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:24.124 17:17:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3c753a23-9cb8-4a8a-971f-a9398d3c46c4 lbd_3 10 
00:13:24.382 17:17:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=72fffe33-dbeb-4a56-afa3-4938c771561f 00:13:24.382 17:17:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='72fffe33-dbeb-4a56-afa3-4938c771561f:2 ' 00:13:24.382 17:17:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:24.382 17:17:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3c753a23-9cb8-4a8a-971f-a9398d3c46c4 lbd_4 10 00:13:24.639 17:17:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c67c258d-b5be-473b-b2d5-10468407f0fa 00:13:24.639 17:17:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c67c258d-b5be-473b-b2d5-10468407f0fa:3 ' 00:13:24.639 17:17:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:24.639 17:17:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3c753a23-9cb8-4a8a-971f-a9398d3c46c4 lbd_5 10 00:13:24.897 17:17:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=6b9c7d23-68e6-42e9-951b-2be6b2c71ed8 00:13:24.897 17:17:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='6b9c7d23-68e6-42e9-951b-2be6b2c71ed8:4 ' 00:13:24.897 17:17:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:24.897 17:17:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3c753a23-9cb8-4a8a-971f-a9398d3c46c4 lbd_6 10 00:13:25.155 17:17:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=46a7a092-a92e-450e-9b1d-2cf38551bffa 00:13:25.155 17:17:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='46a7a092-a92e-450e-9b1d-2cf38551bffa:5 ' 00:13:25.155 17:17:44 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:25.155 17:17:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3c753a23-9cb8-4a8a-971f-a9398d3c46c4 lbd_7 10 00:13:25.412 17:17:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0200ab08-92ae-4f7d-9f83-08935a833f28 00:13:25.412 17:17:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0200ab08-92ae-4f7d-9f83-08935a833f28:6 ' 00:13:25.412 17:17:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:25.412 17:17:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3c753a23-9cb8-4a8a-971f-a9398d3c46c4 lbd_8 10 00:13:25.670 17:17:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=36d937b9-1c44-4a91-a011-d05c0071b640 00:13:25.670 17:17:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='36d937b9-1c44-4a91-a011-d05c0071b640:7 ' 00:13:25.670 17:17:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:25.670 17:17:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3c753a23-9cb8-4a8a-971f-a9398d3c46c4 lbd_9 10 00:13:25.928 17:17:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e8b7635e-bcc2-42cd-9279-9cd2ddae7bd9 00:13:25.928 17:17:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e8b7635e-bcc2-42cd-9279-9cd2ddae7bd9:8 ' 00:13:25.928 17:17:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:25.928 17:17:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3c753a23-9cb8-4a8a-971f-a9398d3c46c4 lbd_10 10 00:13:26.186 17:17:45 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a0cb88d1-6286-4938-b0a3-f9f639895795 00:13:26.186 17:17:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a0cb88d1-6286-4938-b0a3-f9f639895795:9 ' 00:13:26.186 17:17:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target5 Target5_alias 'b6fdf74d-e8e9-4f52-97dc-81855ada40b9:0 1a03e69c-07e2-4553-97bb-c32b8bd9a00f:1 72fffe33-dbeb-4a56-afa3-4938c771561f:2 c67c258d-b5be-473b-b2d5-10468407f0fa:3 6b9c7d23-68e6-42e9-951b-2be6b2c71ed8:4 46a7a092-a92e-450e-9b1d-2cf38551bffa:5 0200ab08-92ae-4f7d-9f83-08935a833f28:6 36d937b9-1c44-4a91-a011-d05c0071b640:7 e8b7635e-bcc2-42cd-9279-9cd2ddae7bd9:8 a0cb88d1-6286-4938-b0a3-f9f639895795:9 ' 1:7 256 -d 00:13:26.446 17:17:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:26.446 17:17:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=8 00:13:26.446 17:17:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 8 ANY 10.0.0.2/32 00:13:26.705 17:17:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 6 -eq 1 ']' 00:13:26.705 17:17:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:27.272 17:17:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc6 00:13:27.272 17:17:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc6 lvs_6 -c 1048576 00:13:27.530 17:17:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=e720b82e-9238-4c16-95d8-fb28b1398a44 00:13:27.530 17:17:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:27.530 17:17:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:27.530 17:17:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:27.530 17:17:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e720b82e-9238-4c16-95d8-fb28b1398a44 lbd_1 10 00:13:27.530 17:17:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d0c8d00d-9a1d-4c70-88ff-da2616539ee3 00:13:27.530 17:17:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d0c8d00d-9a1d-4c70-88ff-da2616539ee3:0 ' 00:13:27.530 17:17:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:27.787 17:17:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e720b82e-9238-4c16-95d8-fb28b1398a44 lbd_2 10 00:13:27.787 17:17:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=1492453b-7988-4656-85d7-37d12f80f107 00:13:27.787 17:17:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='1492453b-7988-4656-85d7-37d12f80f107:1 ' 00:13:27.787 17:17:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:28.045 17:17:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e720b82e-9238-4c16-95d8-fb28b1398a44 lbd_3 10 00:13:28.304 17:17:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a8b8381c-30b5-4717-8872-f397f53a76a4 00:13:28.304 17:17:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a8b8381c-30b5-4717-8872-f397f53a76a4:2 ' 00:13:28.304 17:17:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:28.304 17:17:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
e720b82e-9238-4c16-95d8-fb28b1398a44 lbd_4 10 00:13:28.563 17:17:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e5e68e44-f1fe-459b-840a-b88d8ea6d76b 00:13:28.563 17:17:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e5e68e44-f1fe-459b-840a-b88d8ea6d76b:3 ' 00:13:28.563 17:17:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:28.563 17:17:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e720b82e-9238-4c16-95d8-fb28b1398a44 lbd_5 10 00:13:28.822 17:17:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=fd6b4ee0-b852-4b50-b0c0-dc7e81be200d 00:13:28.822 17:17:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='fd6b4ee0-b852-4b50-b0c0-dc7e81be200d:4 ' 00:13:28.822 17:17:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:28.822 17:17:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e720b82e-9238-4c16-95d8-fb28b1398a44 lbd_6 10 00:13:29.080 17:17:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=dec698f7-59a8-44f7-979d-a1607c5d7eaf 00:13:29.080 17:17:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='dec698f7-59a8-44f7-979d-a1607c5d7eaf:5 ' 00:13:29.080 17:17:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:29.080 17:17:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e720b82e-9238-4c16-95d8-fb28b1398a44 lbd_7 10 00:13:29.401 17:17:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=99a49890-23b5-4d76-9e1e-f795099c4776 00:13:29.401 17:17:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='99a49890-23b5-4d76-9e1e-f795099c4776:6 ' 
00:13:29.401 17:17:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:29.401 17:17:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e720b82e-9238-4c16-95d8-fb28b1398a44 lbd_8 10 00:13:29.660 17:17:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=6ea4e416-5f46-42d6-b55b-3c6d24057687 00:13:29.660 17:17:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='6ea4e416-5f46-42d6-b55b-3c6d24057687:7 ' 00:13:29.660 17:17:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:29.660 17:17:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e720b82e-9238-4c16-95d8-fb28b1398a44 lbd_9 10 00:13:29.919 17:17:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f8b9b70d-1dd4-4b7a-a6bc-21e830c3bda5 00:13:29.919 17:17:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f8b9b70d-1dd4-4b7a-a6bc-21e830c3bda5:8 ' 00:13:29.919 17:17:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:29.919 17:17:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e720b82e-9238-4c16-95d8-fb28b1398a44 lbd_10 10 00:13:30.176 17:17:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=4f11613a-83fb-4b81-a27c-460acfe16eb9 00:13:30.176 17:17:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='4f11613a-83fb-4b81-a27c-460acfe16eb9:9 ' 00:13:30.176 17:17:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target6 Target6_alias 'd0c8d00d-9a1d-4c70-88ff-da2616539ee3:0 1492453b-7988-4656-85d7-37d12f80f107:1 a8b8381c-30b5-4717-8872-f397f53a76a4:2 
e5e68e44-f1fe-459b-840a-b88d8ea6d76b:3 fd6b4ee0-b852-4b50-b0c0-dc7e81be200d:4 dec698f7-59a8-44f7-979d-a1607c5d7eaf:5 99a49890-23b5-4d76-9e1e-f795099c4776:6 6ea4e416-5f46-42d6-b55b-3c6d24057687:7 f8b9b70d-1dd4-4b7a-a6bc-21e830c3bda5:8 4f11613a-83fb-4b81-a27c-460acfe16eb9:9 ' 1:8 256 -d 00:13:30.433 17:17:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:30.433 17:17:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=9 00:13:30.433 17:17:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 9 ANY 10.0.0.2/32 00:13:30.692 17:17:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 7 -eq 1 ']' 00:13:30.692 17:17:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:30.950 17:17:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc7 00:13:30.950 17:17:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc7 lvs_7 -c 1048576 00:13:31.208 17:17:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=f3fa6f03-68c5-43ac-a035-a8b8b1d89dca 00:13:31.208 17:17:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:31.208 17:17:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:31.208 17:17:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:31.208 17:17:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3fa6f03-68c5-43ac-a035-a8b8b1d89dca lbd_1 10 00:13:31.467 17:17:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c7e88662-c1e4-4845-836e-2746064d6f97 00:13:31.467 17:17:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@62 -- # LUNs+='c7e88662-c1e4-4845-836e-2746064d6f97:0 ' 00:13:31.467 17:17:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:31.467 17:17:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3fa6f03-68c5-43ac-a035-a8b8b1d89dca lbd_2 10 00:13:31.743 17:17:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=75d72902-d92e-4122-a7c1-69c1f6873b11 00:13:31.743 17:17:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='75d72902-d92e-4122-a7c1-69c1f6873b11:1 ' 00:13:31.743 17:17:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:31.743 17:17:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3fa6f03-68c5-43ac-a035-a8b8b1d89dca lbd_3 10 00:13:32.010 17:17:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=595dd6c9-8b07-4420-a9b9-06af0bbf74bc 00:13:32.010 17:17:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='595dd6c9-8b07-4420-a9b9-06af0bbf74bc:2 ' 00:13:32.010 17:17:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:32.010 17:17:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3fa6f03-68c5-43ac-a035-a8b8b1d89dca lbd_4 10 00:13:32.575 17:17:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=08c1d532-2fb8-4832-973a-0fbaefd2663e 00:13:32.575 17:17:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='08c1d532-2fb8-4832-973a-0fbaefd2663e:3 ' 00:13:32.575 17:17:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:32.575 17:17:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3fa6f03-68c5-43ac-a035-a8b8b1d89dca lbd_5 10 00:13:32.575 17:17:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=7c596c8e-1f89-402b-a12a-01c98e1beba8 00:13:32.575 17:17:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='7c596c8e-1f89-402b-a12a-01c98e1beba8:4 ' 00:13:32.575 17:17:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:32.575 17:17:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3fa6f03-68c5-43ac-a035-a8b8b1d89dca lbd_6 10 00:13:32.834 17:17:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b752af4a-a7df-4f0b-a0e7-ebdf0515f786 00:13:32.834 17:17:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b752af4a-a7df-4f0b-a0e7-ebdf0515f786:5 ' 00:13:32.834 17:17:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:32.834 17:17:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3fa6f03-68c5-43ac-a035-a8b8b1d89dca lbd_7 10 00:13:33.092 17:17:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=4eff2ba0-79ee-4f85-9c30-dffb8f00eee8 00:13:33.092 17:17:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='4eff2ba0-79ee-4f85-9c30-dffb8f00eee8:6 ' 00:13:33.092 17:17:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:33.092 17:17:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3fa6f03-68c5-43ac-a035-a8b8b1d89dca lbd_8 10 00:13:33.350 17:17:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=9446197a-7c8d-4dfe-82d3-4137cc0d0e87 00:13:33.350 17:17:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@62 -- # LUNs+='9446197a-7c8d-4dfe-82d3-4137cc0d0e87:7 ' 00:13:33.350 17:17:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:33.350 17:17:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3fa6f03-68c5-43ac-a035-a8b8b1d89dca lbd_9 10 00:13:33.609 17:17:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=61e10208-6947-45a0-a50a-8e2c908e6d7c 00:13:33.609 17:17:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='61e10208-6947-45a0-a50a-8e2c908e6d7c:8 ' 00:13:33.610 17:17:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:33.610 17:17:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3fa6f03-68c5-43ac-a035-a8b8b1d89dca lbd_10 10 00:13:33.868 17:17:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=80332bcf-90a1-4198-bef3-24c4182af9b9 00:13:33.868 17:17:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='80332bcf-90a1-4198-bef3-24c4182af9b9:9 ' 00:13:33.869 17:17:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target7 Target7_alias 'c7e88662-c1e4-4845-836e-2746064d6f97:0 75d72902-d92e-4122-a7c1-69c1f6873b11:1 595dd6c9-8b07-4420-a9b9-06af0bbf74bc:2 08c1d532-2fb8-4832-973a-0fbaefd2663e:3 7c596c8e-1f89-402b-a12a-01c98e1beba8:4 b752af4a-a7df-4f0b-a0e7-ebdf0515f786:5 4eff2ba0-79ee-4f85-9c30-dffb8f00eee8:6 9446197a-7c8d-4dfe-82d3-4137cc0d0e87:7 61e10208-6947-45a0-a50a-8e2c908e6d7c:8 80332bcf-90a1-4198-bef3-24c4182af9b9:9 ' 1:9 256 -d 00:13:34.127 17:17:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:34.127 17:17:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=10 
00:13:34.127 17:17:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 10 ANY 10.0.0.2/32 00:13:34.384 17:17:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 8 -eq 1 ']' 00:13:34.384 17:17:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:34.642 17:17:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc8 00:13:34.642 17:17:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc8 lvs_8 -c 1048576 00:13:34.900 17:17:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=be4d6a1c-b2b8-4b50-a608-57b2b614927e 00:13:34.900 17:17:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:34.900 17:17:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:34.900 17:17:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:34.900 17:17:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u be4d6a1c-b2b8-4b50-a608-57b2b614927e lbd_1 10 00:13:35.158 17:17:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=9dc46038-edae-45c1-b324-ccd0cf36eae5 00:13:35.158 17:17:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='9dc46038-edae-45c1-b324-ccd0cf36eae5:0 ' 00:13:35.158 17:17:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:35.159 17:17:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u be4d6a1c-b2b8-4b50-a608-57b2b614927e lbd_2 10 00:13:35.417 17:17:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
lb_name=85c61a07-3de5-4c69-8e42-c08b0cb311d0 00:13:35.417 17:17:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='85c61a07-3de5-4c69-8e42-c08b0cb311d0:1 ' 00:13:35.417 17:17:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:35.417 17:17:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u be4d6a1c-b2b8-4b50-a608-57b2b614927e lbd_3 10 00:13:35.674 17:17:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=876a1ec1-8a9c-4764-810a-02cac57f04ff 00:13:35.674 17:17:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='876a1ec1-8a9c-4764-810a-02cac57f04ff:2 ' 00:13:35.674 17:17:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:35.674 17:17:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u be4d6a1c-b2b8-4b50-a608-57b2b614927e lbd_4 10 00:13:35.933 17:17:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=16216735-35f8-40b1-a805-ed531f7175ce 00:13:35.933 17:17:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='16216735-35f8-40b1-a805-ed531f7175ce:3 ' 00:13:35.933 17:17:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:35.933 17:17:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u be4d6a1c-b2b8-4b50-a608-57b2b614927e lbd_5 10 00:13:36.191 17:17:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=58e1d3f6-cbd3-417b-80bb-40f5675c56e7 00:13:36.191 17:17:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='58e1d3f6-cbd3-417b-80bb-40f5675c56e7:4 ' 00:13:36.191 17:17:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:36.191 17:17:54 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u be4d6a1c-b2b8-4b50-a608-57b2b614927e lbd_6 10 00:13:36.449 17:17:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=deb6bdb3-b494-4da8-941e-85d57e5de728 00:13:36.449 17:17:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='deb6bdb3-b494-4da8-941e-85d57e5de728:5 ' 00:13:36.449 17:17:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:36.449 17:17:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u be4d6a1c-b2b8-4b50-a608-57b2b614927e lbd_7 10 00:13:36.707 17:17:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b87d148b-0178-4326-8ee8-f3d73d321875 00:13:36.707 17:17:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b87d148b-0178-4326-8ee8-f3d73d321875:6 ' 00:13:36.707 17:17:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:36.707 17:17:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u be4d6a1c-b2b8-4b50-a608-57b2b614927e lbd_8 10 00:13:36.965 17:17:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f8c01af4-0959-4077-ab97-621f982abdfb 00:13:36.965 17:17:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f8c01af4-0959-4077-ab97-621f982abdfb:7 ' 00:13:36.965 17:17:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:36.965 17:17:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u be4d6a1c-b2b8-4b50-a608-57b2b614927e lbd_9 10 00:13:37.223 17:17:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=36018235-fe93-425e-b228-c186c98f524a 
00:13:37.223 17:17:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='36018235-fe93-425e-b228-c186c98f524a:8 ' 00:13:37.223 17:17:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:37.223 17:17:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u be4d6a1c-b2b8-4b50-a608-57b2b614927e lbd_10 10 00:13:37.480 17:17:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b10bf19d-bf20-4f6e-87c9-e3942815ee45 00:13:37.480 17:17:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b10bf19d-bf20-4f6e-87c9-e3942815ee45:9 ' 00:13:37.480 17:17:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target8 Target8_alias '9dc46038-edae-45c1-b324-ccd0cf36eae5:0 85c61a07-3de5-4c69-8e42-c08b0cb311d0:1 876a1ec1-8a9c-4764-810a-02cac57f04ff:2 16216735-35f8-40b1-a805-ed531f7175ce:3 58e1d3f6-cbd3-417b-80bb-40f5675c56e7:4 deb6bdb3-b494-4da8-941e-85d57e5de728:5 b87d148b-0178-4326-8ee8-f3d73d321875:6 f8c01af4-0959-4077-ab97-621f982abdfb:7 36018235-fe93-425e-b228-c186c98f524a:8 b10bf19d-bf20-4f6e-87c9-e3942815ee45:9 ' 1:10 256 -d 00:13:37.738 17:17:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:37.738 17:17:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=11 00:13:37.738 17:17:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 11 ANY 10.0.0.2/32 00:13:37.995 17:17:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 9 -eq 1 ']' 00:13:37.995 17:17:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:38.253 17:17:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # 
bdev=Malloc9 00:13:38.253 17:17:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc9 lvs_9 -c 1048576 00:13:38.512 17:17:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=7e20e7d9-a526-4b8c-b14b-2090f618bd02 00:13:38.512 17:17:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:38.512 17:17:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:38.512 17:17:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:38.512 17:17:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7e20e7d9-a526-4b8c-b14b-2090f618bd02 lbd_1 10 00:13:38.770 17:17:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c05754ea-a2ab-4a81-a655-0cce4a3615d7 00:13:38.770 17:17:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c05754ea-a2ab-4a81-a655-0cce4a3615d7:0 ' 00:13:38.771 17:17:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:38.771 17:17:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7e20e7d9-a526-4b8c-b14b-2090f618bd02 lbd_2 10 00:13:39.029 17:17:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=7eb37d5b-79f6-4b5d-9f2a-c8048fa081c8 00:13:39.029 17:17:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='7eb37d5b-79f6-4b5d-9f2a-c8048fa081c8:1 ' 00:13:39.029 17:17:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:39.029 17:17:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7e20e7d9-a526-4b8c-b14b-2090f618bd02 lbd_3 10 00:13:39.287 17:17:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@61 -- # lb_name=ba15d14a-b299-4469-9734-2c8a0232f45a 00:13:39.287 17:17:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ba15d14a-b299-4469-9734-2c8a0232f45a:2 ' 00:13:39.287 17:17:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:39.287 17:17:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7e20e7d9-a526-4b8c-b14b-2090f618bd02 lbd_4 10 00:13:39.545 17:17:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a220aef3-b3ed-4581-a3e3-4b4bdb65fab3 00:13:39.545 17:17:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a220aef3-b3ed-4581-a3e3-4b4bdb65fab3:3 ' 00:13:39.545 17:17:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:39.545 17:17:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7e20e7d9-a526-4b8c-b14b-2090f618bd02 lbd_5 10 00:13:39.824 17:17:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=552bc376-00f7-414b-bcf7-4017e0596133 00:13:39.824 17:17:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='552bc376-00f7-414b-bcf7-4017e0596133:4 ' 00:13:39.824 17:17:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:39.824 17:17:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7e20e7d9-a526-4b8c-b14b-2090f618bd02 lbd_6 10 00:13:40.081 17:17:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=781031fd-081d-4614-b65f-a3cb1b6bb030 00:13:40.081 17:17:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='781031fd-081d-4614-b65f-a3cb1b6bb030:5 ' 00:13:40.081 17:17:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 
$NUM_LVOL) 00:13:40.081 17:17:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7e20e7d9-a526-4b8c-b14b-2090f618bd02 lbd_7 10 00:13:40.081 17:17:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=74638548-acdd-4932-a09a-605d8b1b1e6e 00:13:40.081 17:17:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='74638548-acdd-4932-a09a-605d8b1b1e6e:6 ' 00:13:40.082 17:17:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:40.082 17:17:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7e20e7d9-a526-4b8c-b14b-2090f618bd02 lbd_8 10 00:13:40.340 17:17:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=57834719-24a6-40c2-b934-5354e847e98c 00:13:40.340 17:17:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='57834719-24a6-40c2-b934-5354e847e98c:7 ' 00:13:40.340 17:17:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:40.340 17:17:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7e20e7d9-a526-4b8c-b14b-2090f618bd02 lbd_9 10 00:13:40.906 17:17:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=cad4fd58-8e81-4eda-af44-6f26556e6fdd 00:13:40.906 17:17:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='cad4fd58-8e81-4eda-af44-6f26556e6fdd:8 ' 00:13:40.906 17:17:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:40.906 17:17:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7e20e7d9-a526-4b8c-b14b-2090f618bd02 lbd_10 10 00:13:41.164 17:17:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
lb_name=50ca2881-cd47-4097-9a53-15e00c063d03 00:13:41.164 17:17:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='50ca2881-cd47-4097-9a53-15e00c063d03:9 ' 00:13:41.164 17:17:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target9 Target9_alias 'c05754ea-a2ab-4a81-a655-0cce4a3615d7:0 7eb37d5b-79f6-4b5d-9f2a-c8048fa081c8:1 ba15d14a-b299-4469-9734-2c8a0232f45a:2 a220aef3-b3ed-4581-a3e3-4b4bdb65fab3:3 552bc376-00f7-414b-bcf7-4017e0596133:4 781031fd-081d-4614-b65f-a3cb1b6bb030:5 74638548-acdd-4932-a09a-605d8b1b1e6e:6 57834719-24a6-40c2-b934-5354e847e98c:7 cad4fd58-8e81-4eda-af44-6f26556e6fdd:8 50ca2881-cd47-4097-9a53-15e00c063d03:9 ' 1:11 256 -d 00:13:41.164 17:18:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:41.164 17:18:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=12 00:13:41.164 17:18:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 12 ANY 10.0.0.2/32 00:13:41.422 17:18:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 10 -eq 1 ']' 00:13:41.422 17:18:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:41.987 17:18:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc10 00:13:41.988 17:18:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc10 lvs_10 -c 1048576 00:13:42.245 17:18:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=c0b088c6-6d32-4d4e-828b-f8e23c7dc51e 00:13:42.245 17:18:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:42.245 17:18:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:42.245 
17:18:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:42.245 17:18:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0b088c6-6d32-4d4e-828b-f8e23c7dc51e lbd_1 10 00:13:42.505 17:18:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=16acbe4e-65de-40ea-af47-bfe2d29bf99e 00:13:42.505 17:18:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='16acbe4e-65de-40ea-af47-bfe2d29bf99e:0 ' 00:13:42.505 17:18:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:42.505 17:18:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0b088c6-6d32-4d4e-828b-f8e23c7dc51e lbd_2 10 00:13:42.762 17:18:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=af324dae-9d14-44d7-8397-155b5f78a91f 00:13:42.762 17:18:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='af324dae-9d14-44d7-8397-155b5f78a91f:1 ' 00:13:42.762 17:18:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:42.762 17:18:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0b088c6-6d32-4d4e-828b-f8e23c7dc51e lbd_3 10 00:13:43.021 17:18:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=5fc5db93-c71b-4f3c-b7b4-13fc6f4463bf 00:13:43.021 17:18:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='5fc5db93-c71b-4f3c-b7b4-13fc6f4463bf:2 ' 00:13:43.021 17:18:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:43.021 17:18:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0b088c6-6d32-4d4e-828b-f8e23c7dc51e lbd_4 10 00:13:43.279 
17:18:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=685598f5-9111-4cfa-84e7-250526c9d361 00:13:43.279 17:18:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='685598f5-9111-4cfa-84e7-250526c9d361:3 ' 00:13:43.279 17:18:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:43.279 17:18:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0b088c6-6d32-4d4e-828b-f8e23c7dc51e lbd_5 10 00:13:43.279 17:18:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2e49935c-fce8-4cf6-9dd6-41e4d91a3019 00:13:43.279 17:18:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2e49935c-fce8-4cf6-9dd6-41e4d91a3019:4 ' 00:13:43.279 17:18:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:43.279 17:18:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0b088c6-6d32-4d4e-828b-f8e23c7dc51e lbd_6 10 00:13:43.538 17:18:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=487c29ff-5378-47cd-9a92-fb97c24ae06d 00:13:43.538 17:18:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='487c29ff-5378-47cd-9a92-fb97c24ae06d:5 ' 00:13:43.538 17:18:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:43.538 17:18:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0b088c6-6d32-4d4e-828b-f8e23c7dc51e lbd_7 10 00:13:43.796 17:18:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=512bdf02-4bf7-4080-bb0b-4636c276cb9f 00:13:43.796 17:18:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='512bdf02-4bf7-4080-bb0b-4636c276cb9f:6 ' 00:13:43.796 17:18:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:43.796 17:18:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0b088c6-6d32-4d4e-828b-f8e23c7dc51e lbd_8 10 00:13:44.053 17:18:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f660e57a-359e-4531-b7b8-78ffcad73868 00:13:44.053 17:18:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f660e57a-359e-4531-b7b8-78ffcad73868:7 ' 00:13:44.053 17:18:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:44.053 17:18:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0b088c6-6d32-4d4e-828b-f8e23c7dc51e lbd_9 10 00:13:44.311 17:18:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=53acb974-a7a7-4ee7-aee5-3e559b54bad1 00:13:44.311 17:18:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='53acb974-a7a7-4ee7-aee5-3e559b54bad1:8 ' 00:13:44.311 17:18:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:44.311 17:18:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c0b088c6-6d32-4d4e-828b-f8e23c7dc51e lbd_10 10 00:13:44.568 17:18:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=daef87af-c627-47f7-8209-76a63fd4cc5a 00:13:44.568 17:18:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='daef87af-c627-47f7-8209-76a63fd4cc5a:9 ' 00:13:44.569 17:18:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target10 Target10_alias '16acbe4e-65de-40ea-af47-bfe2d29bf99e:0 af324dae-9d14-44d7-8397-155b5f78a91f:1 5fc5db93-c71b-4f3c-b7b4-13fc6f4463bf:2 685598f5-9111-4cfa-84e7-250526c9d361:3 
2e49935c-fce8-4cf6-9dd6-41e4d91a3019:4 487c29ff-5378-47cd-9a92-fb97c24ae06d:5 512bdf02-4bf7-4080-bb0b-4636c276cb9f:6 f660e57a-359e-4531-b7b8-78ffcad73868:7 53acb974-a7a7-4ee7-aee5-3e559b54bad1:8 daef87af-c627-47f7-8209-76a63fd4cc5a:9 ' 1:12 256 -d 00:13:44.826 17:18:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@66 -- # timing_exit setup 00:13:44.826 17:18:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:44.826 17:18:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:44.826 17:18:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@68 -- # sleep 1 00:13:45.807 17:18:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@70 -- # timing_enter discovery 00:13:45.807 17:18:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:45.807 17:18:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:45.807 17:18:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@71 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:13:46.065 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:13:46.065 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:13:46.065 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:13:46.065 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:13:46.065 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:13:46.065 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:13:46.065 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:13:46.065 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:13:46.065 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:13:46.065 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:13:46.065 17:18:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@72 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:13:46.065 [2024-07-22 17:18:04.812098] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:46.065 [2024-07-22 17:18:04.823791] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: 
unsupported INQUIRY VPD page 0xb9 00:13:46.065 [2024-07-22 17:18:04.829878] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:46.065 [2024-07-22 17:18:04.868564] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:46.065 [2024-07-22 17:18:04.869768] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:46.065 [2024-07-22 17:18:04.870017] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:46.065 [2024-07-22 17:18:04.888720] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:46.065 [2024-07-22 17:18:04.898247] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:46.065 [2024-07-22 17:18:04.923815] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:46.065 [2024-07-22 17:18:04.928935] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:46.065 [2024-07-22 17:18:04.961045] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:46.065 [2024-07-22 17:18:04.971176] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:46.065 [2024-07-22 17:18:04.991056] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:46.065 [2024-07-22 17:18:05.005293] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:46.065 [2024-07-22 17:18:05.006159] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:46.324 [2024-07-22 17:18:05.016856] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:46.324 [2024-07-22 17:18:05.040869] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:46.324 [2024-07-22 17:18:05.041641] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:46.324 
[2024-07-22 17:18:05.077526] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
(message repeated for each attached device through [2024-07-22 17:18:06.652737])
00:13:47.921 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260]
00:13:47.921 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260]
00:13:47.921 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260]
00:13:47.921 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260]
00:13:47.921 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260]
00:13:47.921 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260]
00:13:47.921 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260]
00:13:47.921 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260]
00:13:47.921 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260]
00:13:47.921 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260]
00:13:47.921 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful.
00:13:47.921 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful.
00:13:47.921 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful.
00:13:47.921 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful.
00:13:47.921 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful.
00:13:47.921 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful.
00:13:47.921 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful.
00:13:47.921 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful.
00:13:47.921 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful.
00:13:47.921 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful.
00:13:47.921 17:18:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@73 -- # waitforiscsidevices 100
00:13:47.921 17:18:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@116 -- # local num=100
00:13:47.921 17:18:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i = 1 ))
00:13:47.921 17:18:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i <= 20 ))
00:13:47.921 17:18:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3
00:13:47.921 17:18:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*'
00:13:47.921 17:18:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # n=100
00:13:47.921 17:18:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@120 -- # '[' 100 -ne 100 ']'
00:13:47.921 17:18:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@123 -- # return 0
00:13:47.921 17:18:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@74 -- # timing_exit discovery
00:13:47.921 17:18:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable
00:13:47.921 17:18:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x
00:13:47.921 17:18:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@76 -- # timing_enter fio
00:13:47.921 17:18:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable
00:13:47.921 17:18:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x
00:13:47.921 17:18:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 8 -t randwrite -r 10 -v
00:13:47.921 [global]
00:13:47.921 thread=1
invalidate=1
00:13:47.921 rw=randwrite
00:13:47.921 time_based=1
00:13:47.921 runtime=10
00:13:47.921 ioengine=libaio
00:13:47.921 direct=1
00:13:47.921 bs=131072
00:13:47.921 iodepth=8
00:13:47.921 norandommap=0
00:13:47.921 numjobs=1
00:13:48.188 verify_dump=1
00:13:48.188 verify_backlog=512
00:13:48.188 verify_state_save=0
00:13:48.188 do_verify=1
00:13:48.188 verify=crc32c-intel
00:13:48.188 [job0]
00:13:48.188 filename=/dev/sdc
00:13:48.188 [job1]
00:13:48.188 filename=/dev/sdd
([job2] through [job99] follow the same pattern, each bound to one of the remaining attached SCSI devices, ending with [job99] filename=/dev/sdag)
00:13:49.563 queue_depth set to 113 (sdc)
(queue_depth set to 113 for each of the remaining 99 devices, sdd through sdag, between 00:13:49.563 and 00:13:51.910)
00:13:51.910 job0: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
(job1 through job99 report identical parameters)
00:13:52.170 fio-3.35
00:13:52.170 Starting 100 threads
00:13:52.170 [2024-07-22 17:18:11.003697] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
(message repeated through [2024-07-22 17:18:11.120100])
[2024-07-22 17:18:11.122213] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.124217] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.126260] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.129494] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.131720] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.133774] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.136070] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.138437] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.140392] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.142736] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.145341] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.147451] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.149603] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.151896] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.154047] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.156184] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.158197] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.160443] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.164335] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.168577] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.171857] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.178273] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.180638] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.184336] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.187140] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.189585] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.192909] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.198821] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.201655] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.203774] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.206010] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.207898] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.209720] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.211603] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.213437] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.215308] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.217061] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.219075] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.220852] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.222705] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.224454] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.226234] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.228074] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.230190] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.232249] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.234878] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.237620] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.240273] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.242340] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 
17:18:11.244217] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.246086] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.247989] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.250674] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.253132] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.255419] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.257248] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.261427] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:52.428 [2024-07-22 17:18:11.263701] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:57.708 [2024-07-22 17:18:15.757797] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:57.708 [2024-07-22 17:18:15.918167] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:57.708 [2024-07-22 17:18:15.947239] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:57.708 [2024-07-22 17:18:16.098546] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:57.708 [2024-07-22 17:18:16.206393] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:57.708 [2024-07-22 17:18:16.299960] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:57.708 [2024-07-22 17:18:16.434715] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:57.708 [2024-07-22 17:18:16.603217] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:57.966 [2024-07-22 17:18:16.696753] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:57.966 [2024-07-22 17:18:16.760677] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:58.223 [2024-07-22 17:18:16.964129] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:58.223 [2024-07-22 17:18:17.158907] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:58.481 [2024-07-22 17:18:17.265350] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:58.481 [2024-07-22 17:18:17.345381] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:58.739 [2024-07-22 17:18:17.540392] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:58.739 [2024-07-22 17:18:17.568904] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:58.739 [2024-07-22 17:18:17.594154] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:58.739 [2024-07-22 17:18:17.656615] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:58.998 [2024-07-22 17:18:17.737213] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:58.998 [2024-07-22 17:18:17.782902] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:58.998 [2024-07-22 17:18:17.820600] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:58.998 [2024-07-22 17:18:17.857228] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:59.255 [2024-07-22 17:18:18.005377] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:59.255 [2024-07-22 17:18:18.100158] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:13:59.255 [2024-07-22 17:18:18.204668] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:59.514 [2024-07-22 17:18:18.298640] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:59.514 [2024-07-22 17:18:18.369224] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:59.514 [2024-07-22 17:18:18.437336] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:59.772 [2024-07-22 17:18:18.550577] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:59.772 [2024-07-22 17:18:18.594804] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:59.772 [2024-07-22 17:18:18.643168] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:00.080 [2024-07-22 17:18:18.736046] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:00.080 [2024-07-22 17:18:18.822261] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:00.080 [2024-07-22 17:18:18.924021] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:00.080 [2024-07-22 17:18:18.987578] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:00.080 [2024-07-22 17:18:19.022325] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:00.338 [2024-07-22 17:18:19.065318] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:00.338 [2024-07-22 17:18:19.251580] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:00.596 [2024-07-22 17:18:19.333463] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:00.596 [2024-07-22 17:18:19.394982] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:00.596 [2024-07-22 
17:18:19.503423] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:00.854 [2024-07-22 17:18:19.586525] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:00.854 [2024-07-22 17:18:19.682394] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:00.854 [2024-07-22 17:18:19.772871] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:01.112 [2024-07-22 17:18:19.927164] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:01.112 [2024-07-22 17:18:20.036536] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:01.370 [2024-07-22 17:18:20.156135] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:01.370 [2024-07-22 17:18:20.231505] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:01.628 [2024-07-22 17:18:20.340031] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:01.628 [2024-07-22 17:18:20.404345] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:01.628 [2024-07-22 17:18:20.462282] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:01.628 [2024-07-22 17:18:20.520212] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:01.628 [2024-07-22 17:18:20.566189] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:01.886 [2024-07-22 17:18:20.662591] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:01.886 [2024-07-22 17:18:20.742721] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:01.886 [2024-07-22 17:18:20.767954] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:01.886 [2024-07-22 17:18:20.819672] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.143 [2024-07-22 17:18:20.906519] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.143 [2024-07-22 17:18:20.944803] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.143 [2024-07-22 17:18:20.974222] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.143 [2024-07-22 17:18:21.027686] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.401 [2024-07-22 17:18:21.127469] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.401 [2024-07-22 17:18:21.194672] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.401 [2024-07-22 17:18:21.267089] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.659 [2024-07-22 17:18:21.372826] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.659 [2024-07-22 17:18:21.461138] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.659 [2024-07-22 17:18:21.534950] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.659 [2024-07-22 17:18:21.605600] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:02.917 [2024-07-22 17:18:21.699013] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:03.175 [2024-07-22 17:18:21.905176] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:03.175 [2024-07-22 17:18:21.992694] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:03.175 [2024-07-22 17:18:22.070347] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:03.433 [2024-07-22 17:18:22.160965] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:14:03.433 [2024-07-22 17:18:22.314373] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:03.691 [2024-07-22 17:18:22.466756] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:03.691 [2024-07-22 17:18:22.593555] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:03.949 [2024-07-22 17:18:22.692234] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:03.949 [2024-07-22 17:18:22.792289] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:04.208 [2024-07-22 17:18:22.900469] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:04.208 [2024-07-22 17:18:22.957337] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:04.208 [2024-07-22 17:18:23.025801] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:04.208 [2024-07-22 17:18:23.107193] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:04.476 [2024-07-22 17:18:23.225753] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:04.477 [2024-07-22 17:18:23.293700] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:04.477 [2024-07-22 17:18:23.346430] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:04.477 [2024-07-22 17:18:23.395578] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:04.736 [2024-07-22 17:18:23.440197] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:04.736 [2024-07-22 17:18:23.525291] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:04.736 [2024-07-22 17:18:23.568758] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:04.736 [2024-07-22 
17:18:23.619861] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:04.994 [2024-07-22 17:18:23.717758] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:04.994 [2024-07-22 17:18:23.800914] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:04.994 [2024-07-22 17:18:23.914677] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:05.252 [2024-07-22 17:18:24.039955] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:05.252 [2024-07-22 17:18:24.120135] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:05.513 [2024-07-22 17:18:24.245645] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:05.513 [2024-07-22 17:18:24.324381] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:05.513 [2024-07-22 17:18:24.356981] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:05.513 [2024-07-22 17:18:24.434390] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:05.772 [2024-07-22 17:18:24.557958] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:05.772 [2024-07-22 17:18:24.676104] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.030 [2024-07-22 17:18:24.814653] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.030 [2024-07-22 17:18:24.887276] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.030 [2024-07-22 17:18:24.947915] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:24.993995] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.028393] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.039910] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.044423] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.047869] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.052328] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.054554] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.056623] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.058970] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.061619] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.064096] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.066260] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.068399] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.070541] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.072789] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.075146] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.077359] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.079771] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.081919] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.084227] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.086452] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.088668] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.090957] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.093041] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 [2024-07-22 17:18:25.095180] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.299 00:14:06.299 job0: (groupid=0, jobs=1): err= 0: pid=70864: Mon Jul 22 17:18:25 2024 00:14:06.299 read: IOPS=65, BW=8357KiB/s (8557kB/s)(68.2MiB/8363msec) 00:14:06.299 slat (usec): min=7, max=1164, avg=65.28, stdev=136.40 00:14:06.299 clat (msec): min=6, max=253, avg=31.34, stdev=36.77 00:14:06.299 lat (msec): min=6, max=253, avg=31.40, stdev=36.77 00:14:06.299 clat percentiles (msec): 00:14:06.299 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 12], 00:14:06.299 | 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 21], 60.00th=[ 24], 00:14:06.299 | 70.00th=[ 29], 80.00th=[ 43], 90.00th=[ 66], 95.00th=[ 93], 00:14:06.299 | 99.00th=[ 251], 99.50th=[ 253], 99.90th=[ 255], 99.95th=[ 255], 00:14:06.299 | 99.99th=[ 255] 00:14:06.299 write: IOPS=81, BW=10.2MiB/s (10.7MB/s)(80.0MiB/7852msec); 0 zone resets 00:14:06.299 slat (usec): min=37, max=2351, avg=132.61, stdev=183.22 00:14:06.299 clat (msec): min=47, max=314, avg=97.30, stdev=37.39 00:14:06.299 lat (msec): min=47, max=314, avg=97.44, stdev=37.39 00:14:06.299 clat percentiles (msec): 00:14:06.299 | 1.00th=[ 54], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 
71], 00:14:06.299 | 30.00th=[ 75], 40.00th=[ 81], 50.00th=[ 88], 60.00th=[ 95], 00:14:06.299 | 70.00th=[ 104], 80.00th=[ 113], 90.00th=[ 133], 95.00th=[ 169], 00:14:06.299 | 99.00th=[ 275], 99.50th=[ 300], 99.90th=[ 313], 99.95th=[ 313], 00:14:06.299 | 99.99th=[ 313] 00:14:06.299 bw ( KiB/s): min= 1280, max=13824, per=0.92%, avg=8984.83, stdev=4168.87, samples=18 00:14:06.299 iops : min= 10, max= 108, avg=70.00, stdev=32.64, samples=18 00:14:06.299 lat (msec) : 10=4.72%, 20=17.62%, 50=17.71%, 100=39.97%, 250=18.72% 00:14:06.299 lat (msec) : 500=1.26% 00:14:06.299 cpu : usr=0.48%, sys=0.23%, ctx=1951, majf=0, minf=3 00:14:06.299 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.299 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.299 issued rwts: total=546,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.299 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.299 job1: (groupid=0, jobs=1): err= 0: pid=70865: Mon Jul 22 17:18:25 2024 00:14:06.299 read: IOPS=75, BW=9617KiB/s (9848kB/s)(77.4MiB/8239msec) 00:14:06.299 slat (usec): min=6, max=1040, avg=73.42, stdev=124.81 00:14:06.299 clat (usec): min=8249, max=85277, avg=23542.49, stdev=11018.73 00:14:06.299 lat (usec): min=8261, max=85920, avg=23615.91, stdev=11020.00 00:14:06.299 clat percentiles (usec): 00:14:06.299 | 1.00th=[ 8979], 5.00th=[11207], 10.00th=[12518], 20.00th=[15139], 00:14:06.300 | 30.00th=[17171], 40.00th=[19792], 50.00th=[22676], 60.00th=[23987], 00:14:06.300 | 70.00th=[25560], 80.00th=[28967], 90.00th=[34866], 95.00th=[42730], 00:14:06.300 | 99.00th=[81265], 99.50th=[83362], 99.90th=[85459], 99.95th=[85459], 00:14:06.300 | 99.99th=[85459] 00:14:06.300 write: IOPS=78, BW=9.79MiB/s (10.3MB/s)(80.0MiB/8169msec); 0 zone resets 00:14:06.300 slat (usec): min=38, max=5182, avg=137.21, stdev=266.03 00:14:06.300 clat (msec): min=54, max=307, 
avg=101.12, stdev=36.33
00:14:06.300 lat (msec): min=54, max=307, avg=101.26, stdev=36.34
00:14:06.300 clat percentiles (msec):
00:14:06.300 | 1.00th=[ 62], 5.00th=[ 68], 10.00th=[ 70], 20.00th=[ 73],
00:14:06.300 | 30.00th=[ 81], 40.00th=[ 86], 50.00th=[ 92], 60.00th=[ 100],
00:14:06.300 | 70.00th=[ 111], 80.00th=[ 118], 90.00th=[ 136], 95.00th=[ 171],
00:14:06.300 | 99.00th=[ 249], 99.50th=[ 284], 99.90th=[ 309], 99.95th=[ 309],
00:14:06.300 | 99.99th=[ 309]
00:14:06.300 bw ( KiB/s): min= 1021, max=13056, per=0.87%, avg=8527.26, stdev=3930.69, samples=19
00:14:06.300 iops : min= 7, max= 102, avg=66.42, stdev=30.99, samples=19
00:14:06.300 lat (msec) : 10=1.51%, 20=18.51%, 50=27.88%, 100=32.88%, 250=18.82%
00:14:06.300 lat (msec) : 500=0.40%
00:14:06.300 cpu : usr=0.53%, sys=0.20%, ctx=2151, majf=0, minf=5
00:14:06.300 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.300 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.300 issued rwts: total=619,640,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.300 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.300 job2: (groupid=0, jobs=1): err= 0: pid=70875: Mon Jul 22 17:18:25 2024
00:14:06.300 read: IOPS=72, BW=9306KiB/s (9529kB/s)(80.0MiB/8803msec)
00:14:06.300 slat (usec): min=6, max=971, avg=58.71, stdev=111.68
00:14:06.300 clat (usec): min=5244, max=91028, avg=13824.08, stdev=8954.78
00:14:06.300 lat (usec): min=5270, max=91039, avg=13882.79, stdev=8958.16
00:14:06.300 clat percentiles (usec):
00:14:06.300 | 1.00th=[ 6456], 5.00th=[ 6783], 10.00th=[ 7111], 20.00th=[ 7898],
00:14:06.300 | 30.00th=[ 8455], 40.00th=[ 9765], 50.00th=[11338], 60.00th=[12518],
00:14:06.300 | 70.00th=[14484], 80.00th=[17957], 90.00th=[23987], 95.00th=[27657],
00:14:06.300 | 99.00th=[58459], 99.50th=[64226], 99.90th=[90702], 99.95th=[90702],
00:14:06.300 | 99.99th=[90702]
00:14:06.300 write: IOPS=74, BW=9509KiB/s (9738kB/s)(83.4MiB/8978msec); 0 zone resets
00:14:06.300 slat (usec): min=36, max=4346, avg=147.19, stdev=269.64
00:14:06.300 clat (msec): min=5, max=338, avg=106.86, stdev=56.44
00:14:06.300 lat (msec): min=6, max=338, avg=107.00, stdev=56.45
00:14:06.300 clat percentiles (msec):
00:14:06.300 | 1.00th=[ 10], 5.00th=[ 67], 10.00th=[ 68], 20.00th=[ 71],
00:14:06.300 | 30.00th=[ 75], 40.00th=[ 81], 50.00th=[ 87], 60.00th=[ 100],
00:14:06.300 | 70.00th=[ 113], 80.00th=[ 133], 90.00th=[ 199], 95.00th=[ 224],
00:14:06.300 | 99.00th=[ 300], 99.50th=[ 305], 99.90th=[ 338], 99.95th=[ 338],
00:14:06.300 | 99.99th=[ 338]
00:14:06.300 bw ( KiB/s): min= 1536, max=18981, per=0.86%, avg=8447.95, stdev=4577.58, samples=20
00:14:06.300 iops : min= 12, max= 148, avg=65.90, stdev=35.72, samples=20
00:14:06.300 lat (msec) : 10=21.04%, 20=21.81%, 50=7.88%, 100=29.07%, 250=18.36%
00:14:06.300 lat (msec) : 500=1.84%
00:14:06.300 cpu : usr=0.45%, sys=0.30%, ctx=2198, majf=0, minf=1
00:14:06.300 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.300 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.300 issued rwts: total=640,667,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.300 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.300 job3: (groupid=0, jobs=1): err= 0: pid=70881: Mon Jul 22 17:18:25 2024
00:14:06.300 read: IOPS=71, BW=9201KiB/s (9422kB/s)(80.0MiB/8903msec)
00:14:06.300 slat (usec): min=6, max=1233, avg=57.66, stdev=126.21
00:14:06.300 clat (usec): min=4386, max=71466, avg=11954.25, stdev=7604.99
00:14:06.300 lat (usec): min=4418, max=71475, avg=12011.91, stdev=7601.34
00:14:06.300 clat percentiles (usec):
00:14:06.300 | 1.00th=[ 5538], 5.00th=[ 5866], 10.00th=[ 6128], 20.00th=[ 7242],
00:14:06.300 | 30.00th=[ 8291], 40.00th=[ 9503], 50.00th=[10683], 60.00th=[11469],
00:14:06.300 | 70.00th=[13304], 80.00th=[14877], 90.00th=[17695], 95.00th=[20841],
00:14:06.300 | 99.00th=[58983], 99.50th=[67634], 99.90th=[71828], 99.95th=[71828],
00:14:06.300 | 99.99th=[71828]
00:14:06.300 write: IOPS=74, BW=9578KiB/s (9808kB/s)(85.2MiB/9114msec); 0 zone resets
00:14:06.300 slat (usec): min=38, max=7389, avg=136.40, stdev=322.35
00:14:06.300 clat (usec): min=866, max=350079, avg=106157.43, stdev=54410.33
00:14:06.300 lat (usec): min=940, max=350172, avg=106293.83, stdev=54433.38
00:14:06.300 clat percentiles (msec):
00:14:06.300 | 1.00th=[ 5], 5.00th=[ 19], 10.00th=[ 68], 20.00th=[ 69],
00:14:06.300 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 91], 60.00th=[ 108],
00:14:06.300 | 70.00th=[ 123], 80.00th=[ 144], 90.00th=[ 188], 95.00th=[ 218],
00:14:06.300 | 99.00th=[ 288], 99.50th=[ 309], 99.90th=[ 351], 99.95th=[ 351],
00:14:06.300 | 99.99th=[ 351]
00:14:06.300 bw ( KiB/s): min= 3840, max=21803, per=0.88%, avg=8640.30, stdev=4644.47, samples=20
00:14:06.300 iops : min= 30, max= 170, avg=67.40, stdev=36.23, samples=20
00:14:06.300 lat (usec) : 1000=0.08%
00:14:06.300 lat (msec) : 2=0.15%, 10=22.54%, 20=25.79%, 50=2.57%, 100=25.57%
00:14:06.300 lat (msec) : 250=22.39%, 500=0.91%
00:14:06.300 cpu : usr=0.45%, sys=0.28%, ctx=2153, majf=0, minf=3
00:14:06.300 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.300 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.300 issued rwts: total=640,682,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.300 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.300 job4: (groupid=0, jobs=1): err= 0: pid=70890: Mon Jul 22 17:18:25 2024
00:14:06.300 read: IOPS=58, BW=7463KiB/s (7642kB/s)(60.0MiB/8233msec)
00:14:06.300 slat (usec): min=6, max=1279, avg=59.92, stdev=127.57
00:14:06.300 clat (msec): min=5, max=126, avg=24.24, stdev=20.72
00:14:06.300 lat (msec): min=5, max=126, avg=24.30, stdev=20.72
00:14:06.300 clat percentiles (msec):
00:14:06.300 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11],
00:14:06.300 | 30.00th=[ 14], 40.00th=[ 17], 50.00th=[ 20], 60.00th=[ 23],
00:14:06.300 | 70.00th=[ 25], 80.00th=[ 28], 90.00th=[ 39], 95.00th=[ 83],
00:14:06.300 | 99.00th=[ 105], 99.50th=[ 108], 99.90th=[ 127], 99.95th=[ 127],
00:14:06.300 | 99.99th=[ 127]
00:14:06.300 write: IOPS=65, BW=8446KiB/s (8648kB/s)(70.8MiB/8578msec); 0 zone resets
00:14:06.300 slat (usec): min=37, max=10282, avg=139.73, stdev=449.99
00:14:06.300 clat (msec): min=63, max=464, avg=120.12, stdev=63.90
00:14:06.300 lat (msec): min=63, max=464, avg=120.26, stdev=63.91
00:14:06.300 clat percentiles (msec):
00:14:06.300 | 1.00th=[ 68], 5.00th=[ 69], 10.00th=[ 70], 20.00th=[ 78],
00:14:06.300 | 30.00th=[ 86], 40.00th=[ 92], 50.00th=[ 100], 60.00th=[ 108],
00:14:06.300 | 70.00th=[ 121], 80.00th=[ 142], 90.00th=[ 203], 95.00th=[ 264],
00:14:06.300 | 99.00th=[ 397], 99.50th=[ 422], 99.90th=[ 464], 99.95th=[ 464],
00:14:06.300 | 99.99th=[ 464]
00:14:06.300 bw ( KiB/s): min= 768, max=13056, per=0.77%, avg=7514.16, stdev=3793.67, samples=19
00:14:06.300 iops : min= 6, max= 102, avg=58.53, stdev=29.77, samples=19
00:14:06.300 lat (msec) : 10=7.36%, 20=16.83%, 50=18.16%, 100=30.21%, 250=24.47%
00:14:06.300 lat (msec) : 500=2.96%
00:14:06.300 cpu : usr=0.38%, sys=0.23%, ctx=1767, majf=0, minf=7
00:14:06.300 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.300 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.301 issued rwts: total=480,566,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.301 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.301 job5: (groupid=0, jobs=1): err= 0: pid=71037: Mon Jul 22 17:18:25 2024
00:14:06.301 read: IOPS=74, BW=9561KiB/s (9791kB/s)(80.0MiB/8568msec)
00:14:06.301 slat (usec): min=8, max=1140, avg=66.49, stdev=135.39
00:14:06.301 clat (usec): min=5792, max=42248, avg=11004.47, stdev=5709.74
00:14:06.301 lat (usec): min=5815, max=42258, avg=11070.96, stdev=5705.22
00:14:06.301 clat percentiles (usec):
00:14:06.301 | 1.00th=[ 5997], 5.00th=[ 6325], 10.00th=[ 6587], 20.00th=[ 7111],
00:14:06.301 | 30.00th=[ 7832], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10290],
00:14:06.301 | 70.00th=[11076], 80.00th=[13304], 90.00th=[17433], 95.00th=[21627],
00:14:06.301 | 99.00th=[38011], 99.50th=[38536], 99.90th=[42206], 99.95th=[42206],
00:14:06.301 | 99.99th=[42206]
00:14:06.301 write: IOPS=72, BW=9337KiB/s (9561kB/s)(83.6MiB/9171msec); 0 zone resets
00:14:06.301 slat (usec): min=38, max=2150, avg=128.57, stdev=174.12
00:14:06.301 clat (msec): min=30, max=376, avg=108.88, stdev=53.87
00:14:06.301 lat (msec): min=30, max=376, avg=109.01, stdev=53.86
00:14:06.301 clat percentiles (msec):
00:14:06.301 | 1.00th=[ 37], 5.00th=[ 67], 10.00th=[ 68], 20.00th=[ 70],
00:14:06.301 | 30.00th=[ 74], 40.00th=[ 80], 50.00th=[ 87], 60.00th=[ 103],
00:14:06.301 | 70.00th=[ 116], 80.00th=[ 140], 90.00th=[ 184], 95.00th=[ 222],
00:14:06.301 | 99.00th=[ 296], 99.50th=[ 359], 99.90th=[ 376], 99.95th=[ 376],
00:14:06.301 | 99.99th=[ 376]
00:14:06.301 bw ( KiB/s): min= 1024, max=14848, per=0.86%, avg=8461.30, stdev=4318.10, samples=20
00:14:06.301 iops : min= 8, max= 116, avg=66.00, stdev=33.78, samples=20
00:14:06.301 lat (msec) : 10=28.34%, 20=16.65%, 50=4.51%, 100=29.26%, 250=19.79%
00:14:06.301 lat (msec) : 500=1.45%
00:14:06.301 cpu : usr=0.53%, sys=0.25%, ctx=2115, majf=0, minf=3
00:14:06.301 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.301 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.301 issued rwts: total=640,669,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.301 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.301 job6: (groupid=0, jobs=1): err= 0: pid=71131: Mon Jul 22 17:18:25 2024
00:14:06.301 read: IOPS=57, BW=7300KiB/s (7476kB/s)(60.0MiB/8416msec)
00:14:06.301 slat (usec): min=6, max=1033, avg=79.66, stdev=144.47
00:14:06.301 clat (usec): min=8575, max=71529, avg=24952.78, stdev=14247.98
00:14:06.301 lat (usec): min=8764, max=71549, avg=25032.44, stdev=14238.27
00:14:06.301 clat percentiles (usec):
00:14:06.301 | 1.00th=[ 9110], 5.00th=[10814], 10.00th=[11994], 20.00th=[14222],
00:14:06.301 | 30.00th=[15401], 40.00th=[16909], 50.00th=[19530], 60.00th=[23200],
00:14:06.301 | 70.00th=[28181], 80.00th=[34866], 90.00th=[45351], 95.00th=[57410],
00:14:06.301 | 99.00th=[68682], 99.50th=[69731], 99.90th=[71828], 99.95th=[71828],
00:14:06.301 | 99.99th=[71828]
00:14:06.301 write: IOPS=71, BW=9166KiB/s (9386kB/s)(76.5MiB/8546msec); 0 zone resets
00:14:06.301 slat (usec): min=39, max=7770, avg=141.09, stdev=349.82
00:14:06.301 clat (msec): min=38, max=493, avg=110.79, stdev=55.80
00:14:06.301 lat (msec): min=38, max=493, avg=110.93, stdev=55.80
00:14:06.301 clat percentiles (msec):
00:14:06.301 | 1.00th=[ 44], 5.00th=[ 69], 10.00th=[ 69], 20.00th=[ 74],
00:14:06.301 | 30.00th=[ 84], 40.00th=[ 90], 50.00th=[ 96], 60.00th=[ 105],
00:14:06.301 | 70.00th=[ 112], 80.00th=[ 127], 90.00th=[ 169], 95.00th=[ 228],
00:14:06.301 | 99.00th=[ 351], 99.50th=[ 368], 99.90th=[ 493], 99.95th=[ 493],
00:14:06.301 | 99.99th=[ 493]
00:14:06.301 bw ( KiB/s): min= 1024, max=13312, per=0.83%, avg=8136.74, stdev=3992.09, samples=19
00:14:06.301 iops : min= 8, max= 104, avg=63.16, stdev=31.21, samples=19
00:14:06.301 lat (msec) : 10=1.37%, 20=21.34%, 50=18.86%, 100=33.61%, 250=22.89%
00:14:06.301 lat (msec) : 500=1.92%
00:14:06.301 cpu : usr=0.41%, sys=0.23%, ctx=1853, majf=0, minf=7
00:14:06.301 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.301 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.301 issued rwts: total=480,612,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.301 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.301 job7: (groupid=0, jobs=1): err= 0: pid=71215: Mon Jul 22 17:18:25 2024
00:14:06.301 read: IOPS=61, BW=7875KiB/s (8064kB/s)(60.0MiB/7802msec)
00:14:06.301 slat (usec): min=6, max=1198, avg=77.88, stdev=146.44
00:14:06.301 clat (usec): min=7519, max=63517, avg=16330.08, stdev=7550.64
00:14:06.301 lat (usec): min=7564, max=63525, avg=16407.96, stdev=7538.36
00:14:06.301 clat percentiles (usec):
00:14:06.301 | 1.00th=[ 7963], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[11338],
00:14:06.301 | 30.00th=[12518], 40.00th=[13304], 50.00th=[14484], 60.00th=[15926],
00:14:06.301 | 70.00th=[17695], 80.00th=[19792], 90.00th=[24773], 95.00th=[27657],
00:14:06.301 | 99.00th=[48497], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701],
00:14:06.301 | 99.99th=[63701]
00:14:06.301 write: IOPS=70, BW=9013KiB/s (9229kB/s)(80.0MiB/9089msec); 0 zone resets
00:14:06.301 slat (usec): min=39, max=26525, avg=195.98, stdev=1091.01
00:14:06.301 clat (msec): min=17, max=450, avg=112.09, stdev=58.77
00:14:06.301 lat (msec): min=19, max=450, avg=112.28, stdev=58.74
00:14:06.301 clat percentiles (msec):
00:14:06.301 | 1.00th=[ 27], 5.00th=[ 69], 10.00th=[ 70], 20.00th=[ 73],
00:14:06.301 | 30.00th=[ 79], 40.00th=[ 86], 50.00th=[ 95], 60.00th=[ 103],
00:14:06.301 | 70.00th=[ 113], 80.00th=[ 131], 90.00th=[ 186], 95.00th=[ 224],
00:14:06.301 | 99.00th=[ 376], 99.50th=[ 384], 99.90th=[ 451], 99.95th=[ 451],
00:14:06.301 | 99.99th=[ 451]
00:14:06.301 bw ( KiB/s): min= 1536, max=14108, per=0.82%, avg=8097.90, stdev=4025.43, samples=20
00:14:06.301 iops : min= 12, max= 110, avg=63.10, stdev=31.49, samples=20
00:14:06.301 lat (msec) : 10=5.54%, 20=29.46%, 50=8.21%, 100=32.50%, 250=22.32%
00:14:06.301 lat (msec) : 500=1.96%
00:14:06.301 cpu : usr=0.41%, sys=0.26%, ctx=1924, majf=0, minf=3
00:14:06.301 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.301 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.301 issued rwts: total=480,640,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.301 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.301 job8: (groupid=0, jobs=1): err= 0: pid=71316: Mon Jul 22 17:18:25 2024
00:14:06.301 read: IOPS=63, BW=8121KiB/s (8315kB/s)(60.0MiB/7566msec)
00:14:06.301 slat (usec): min=5, max=890, avg=56.04, stdev=101.63
00:14:06.301 clat (usec): min=6530, max=60436, avg=14659.69, stdev=8474.97
00:14:06.301 lat (usec): min=6560, max=60445, avg=14715.73, stdev=8472.13
00:14:06.301 clat percentiles (usec):
00:14:06.301 | 1.00th=[ 6980], 5.00th=[ 7898], 10.00th=[ 8291], 20.00th=[ 9110],
00:14:06.301 | 30.00th=[ 9896], 40.00th=[11207], 50.00th=[12125], 60.00th=[14484],
00:14:06.301 | 70.00th=[15664], 80.00th=[17171], 90.00th=[20317], 95.00th=[28967],
00:14:06.301 | 99.00th=[57410], 99.50th=[59507], 99.90th=[60556], 99.95th=[60556],
00:14:06.301 | 99.99th=[60556]
00:14:06.301 write: IOPS=69, BW=8943KiB/s (9158kB/s)(80.0MiB/9160msec); 0 zone resets
00:14:06.301 slat (usec): min=37, max=1979, avg=122.71, stdev=174.38
00:14:06.301 clat (msec): min=57, max=409, avg=113.67, stdev=54.72
00:14:06.301 lat (msec): min=57, max=410, avg=113.79, stdev=54.72
00:14:06.301 clat percentiles (msec):
00:14:06.302 | 1.00th=[ 67], 5.00th=[ 68], 10.00th=[ 71], 20.00th=[ 74],
00:14:06.302 | 30.00th=[ 80], 40.00th=[ 86], 50.00th=[ 92], 60.00th=[ 101],
00:14:06.302 | 70.00th=[ 115], 80.00th=[ 148], 90.00th=[ 201], 95.00th=[ 230],
00:14:06.302 | 99.00th=[ 296], 99.50th=[ 359], 99.90th=[ 409], 99.95th=[ 409],
00:14:06.302 | 99.99th=[ 409]
00:14:06.302 bw ( KiB/s): min= 768, max=13312, per=0.82%, avg=8098.45, stdev=3937.01, samples=20
00:14:06.302 iops : min= 6, max= 104, avg=63.10, stdev=30.81, samples=20
00:14:06.302 lat (msec) : 10=13.04%, 20=25.09%, 50=3.84%, 100=34.82%, 250=21.34%
00:14:06.302 lat (msec) : 500=1.88%
00:14:06.302 cpu : usr=0.38%, sys=0.23%, ctx=1848, majf=0, minf=5
00:14:06.302 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.302 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.302 issued rwts: total=480,640,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.302 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.302 job9: (groupid=0, jobs=1): err= 0: pid=71444: Mon Jul 22 17:18:25 2024
00:14:06.302 read: IOPS=66, BW=8556KiB/s (8761kB/s)(60.0MiB/7181msec)
00:14:06.302 slat (usec): min=5, max=1352, avg=67.18, stdev=142.24
00:14:06.302 clat (msec): min=4, max=277, avg=25.67, stdev=41.51
00:14:06.302 lat (msec): min=4, max=277, avg=25.74, stdev=41.50
00:14:06.302 clat percentiles (msec):
00:14:06.302 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 11],
00:14:06.302 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 17],
00:14:06.302 | 70.00th=[ 19], 80.00th=[ 23], 90.00th=[ 33], 95.00th=[ 100],
00:14:06.302 | 99.00th=[ 253], 99.50th=[ 264], 99.90th=[ 279], 99.95th=[ 279],
00:14:06.302 | 99.99th=[ 279]
00:14:06.302 write: IOPS=60, BW=7759KiB/s (7945kB/s)(64.4MiB/8496msec); 0 zone resets
00:14:06.302 slat (usec): min=36, max=4363, avg=135.48, stdev=237.48
00:14:06.302 clat (msec): min=60, max=462, avg=131.29, stdev=59.80
00:14:06.302 lat (msec): min=60, max=463, avg=131.43, stdev=59.81
00:14:06.302 clat percentiles (msec):
00:14:06.302 | 1.00th=[ 69], 5.00th=[ 73], 10.00th=[ 77], 20.00th=[ 87],
00:14:06.302 | 30.00th=[ 94], 40.00th=[ 105], 50.00th=[ 114], 60.00th=[ 125],
00:14:06.302 | 70.00th=[ 144], 80.00th=[ 167], 90.00th=[ 205], 95.00th=[ 262],
00:14:06.302 | 99.00th=[ 330], 99.50th=[ 409], 99.90th=[ 464], 99.95th=[ 464],
00:14:06.302 | 99.99th=[ 464]
00:14:06.302 bw ( KiB/s): min= 512, max=12263, per=0.70%, avg=6840.16, stdev=3255.02, samples=19
00:14:06.302 iops : min= 4, max= 95, avg=53.26, stdev=25.43, samples=19
00:14:06.302 lat (msec) : 10=7.54%, 20=28.34%, 50=8.64%, 100=19.80%, 250=32.16%
00:14:06.302 lat (msec) : 500=3.52%
00:14:06.302 cpu : usr=0.38%, sys=0.18%, ctx=1684, majf=0, minf=5
00:14:06.302 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.302 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.302 issued rwts: total=480,515,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.302 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.302 job10: (groupid=0, jobs=1): err= 0: pid=71455: Mon Jul 22 17:18:25 2024
00:14:06.302 read: IOPS=89, BW=11.1MiB/s (11.7MB/s)(100MiB/8970msec)
00:14:06.302 slat (usec): min=6, max=1215, avg=63.59, stdev=121.61
00:14:06.302 clat (usec): min=3111, max=70297, avg=13468.92, stdev=7982.71
00:14:06.302 lat (usec): min=3157, max=70316, avg=13532.51, stdev=7986.71
00:14:06.302 clat percentiles (usec):
00:14:06.302 | 1.00th=[ 6521], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 8717],
00:14:06.302 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[11076], 60.00th=[12649],
00:14:06.302 | 70.00th=[14091], 80.00th=[16450], 90.00th=[21365], 95.00th=[26870],
00:14:06.302 | 99.00th=[49021], 99.50th=[66323], 99.90th=[70779], 99.95th=[70779],
00:14:06.302 | 99.99th=[70779]
00:14:06.302 write: IOPS=109, BW=13.6MiB/s (14.3MB/s)(118MiB/8687msec); 0 zone resets
00:14:06.302 slat (usec): min=35, max=20480, avg=182.04, stdev=809.71
00:14:06.302 clat (msec): min=5, max=255, avg=72.71, stdev=30.56
00:14:06.302 lat (msec): min=5, max=255, avg=72.89, stdev=30.56
00:14:06.302 clat percentiles (msec):
00:14:06.302 | 1.00th=[ 13], 5.00th=[ 49], 10.00th=[ 51], 20.00th=[ 54],
00:14:06.302 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 69],
00:14:06.302 | 70.00th=[ 79], 80.00th=[ 92], 90.00th=[ 113], 95.00th=[ 130],
00:14:06.302 | 99.00th=[ 176], 99.50th=[ 249], 99.90th=[ 255], 99.95th=[ 255],
00:14:06.302 | 99.99th=[ 255]
00:14:06.302 bw ( KiB/s): min= 2560, max=21248, per=1.22%, avg=12026.70, stdev=5537.02, samples=20
00:14:06.302 iops : min= 20, max= 166, avg=93.75, stdev=43.34, samples=20
00:14:06.302 lat (msec) : 4=0.11%, 10=17.57%, 20=24.27%, 50=8.19%, 100=41.79%
00:14:06.302 lat (msec) : 250=7.84%, 500=0.23%
00:14:06.302 cpu : usr=0.71%, sys=0.30%, ctx=2897, majf=0, minf=5
00:14:06.302 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.302 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.302 issued rwts: total=800,947,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.302 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.302 job11: (groupid=0, jobs=1): err= 0: pid=71521: Mon Jul 22 17:18:25 2024
00:14:06.302 read: IOPS=79, BW=9.91MiB/s (10.4MB/s)(80.0MiB/8072msec)
00:14:06.302 slat (usec): min=5, max=2370, avg=55.65, stdev=136.19
00:14:06.302 clat (msec): min=3, max=200, avg=17.27, stdev=21.02
00:14:06.302 lat (msec): min=3, max=201, avg=17.32, stdev=21.03
00:14:06.302 clat percentiles (msec):
00:14:06.302 | 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 8],
00:14:06.302 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 13],
00:14:06.302 | 70.00th=[ 16], 80.00th=[ 21], 90.00th=[ 36], 95.00th=[ 50],
00:14:06.302 | 99.00th=[ 120], 99.50th=[ 150], 99.90th=[ 201], 99.95th=[ 201],
00:14:06.302 | 99.99th=[ 201]
00:14:06.302 write: IOPS=91, BW=11.4MiB/s (12.0MB/s)(98.8MiB/8641msec); 0 zone resets
00:14:06.302 slat (usec): min=35, max=5148, avg=148.14, stdev=281.34
00:14:06.302 clat (msec): min=44, max=270, avg=87.02, stdev=35.66
00:14:06.302 lat (msec): min=45, max=270, avg=87.17, stdev=35.66
00:14:06.302 clat percentiles (msec):
00:14:06.302 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 52], 20.00th=[ 57],
00:14:06.302 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 79], 60.00th=[ 91],
00:14:06.302 | 70.00th=[ 99], 80.00th=[ 111], 90.00th=[ 126], 95.00th=[ 155],
00:14:06.302 | 99.00th=[ 218], 99.50th=[ 247], 99.90th=[ 271], 99.95th=[ 271],
00:14:06.302 | 99.99th=[ 271]
00:14:06.302 bw ( KiB/s): min= 2048, max=17408, per=1.02%, avg=10013.35, stdev=4577.39, samples=20
00:14:06.302 iops : min= 16, max= 136, avg=78.10, stdev=35.71, samples=20
00:14:06.302 lat (msec) : 4=0.35%, 10=20.35%, 20=15.10%, 50=9.93%, 100=38.74%
00:14:06.302 lat (msec) : 250=15.31%, 500=0.21%
00:14:06.302 cpu : usr=0.44%, sys=0.36%, ctx=2448, majf=0, minf=3
00:14:06.302 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.302 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.302 issued rwts: total=640,790,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.302 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.302 job12: (groupid=0, jobs=1): err= 0: pid=71522: Mon Jul 22 17:18:25 2024
00:14:06.302 read: IOPS=88, BW=11.1MiB/s (11.7MB/s)(100MiB/8991msec)
00:14:06.302 slat (usec): min=6, max=1551, avg=46.15, stdev=98.86
00:14:06.302 clat (usec): min=3593, max=83305, avg=14203.87, stdev=9051.59
00:14:06.302 lat (usec): min=3608, max=83328, avg=14250.02, stdev=9049.83
00:14:06.302 clat percentiles (usec):
00:14:06.302 | 1.00th=[ 6783], 5.00th=[ 7963], 10.00th=[ 8225], 20.00th=[ 8848],
00:14:06.302 | 30.00th=[ 9634], 40.00th=[10421], 50.00th=[11338], 60.00th=[12913],
00:14:06.302 | 70.00th=[14746], 80.00th=[17433], 90.00th=[20841], 95.00th=[28705],
00:14:06.302 | 99.00th=[62129], 99.50th=[63701], 99.90th=[83362], 99.95th=[83362],
00:14:06.302 | 99.99th=[83362]
00:14:06.302 write: IOPS=107, BW=13.5MiB/s (14.1MB/s)(116MiB/8613msec); 0 zone resets
00:14:06.302 slat (usec): min=36, max=6527, avg=144.34, stdev=298.48
00:14:06.302 clat (msec): min=20, max=350, avg=73.59, stdev=36.37
00:14:06.302 lat (msec): min=20, max=351, avg=73.73, stdev=36.38
00:14:06.302 clat percentiles (msec):
00:14:06.302 | 1.00th=[ 24], 5.00th=[ 50], 10.00th=[ 50], 20.00th=[ 52],
00:14:06.302 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 68],
00:14:06.302 | 70.00th=[ 77], 80.00th=[ 89], 90.00th=[ 112], 95.00th=[ 136],
00:14:06.302 | 99.00th=[ 234], 99.50th=[ 271], 99.90th=[ 351], 99.95th=[ 351],
00:14:06.302 | 99.99th=[ 351]
00:14:06.302 bw ( KiB/s): min= 512, max=20439, per=1.20%, avg=11785.80, stdev=5989.30, samples=20
00:14:06.302 iops : min= 4, max= 159, avg=91.95, stdev=46.76, samples=20
00:14:06.302 lat (msec) : 4=0.06%, 10=15.28%, 20=25.98%, 50=10.59%, 100=40.86%
00:14:06.302 lat (msec) : 250=6.83%, 500=0.41%
00:14:06.302 cpu : usr=0.65%, sys=0.35%, ctx=2728, majf=0, minf=3
00:14:06.302 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.302 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.302 issued rwts: total=800,928,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.302 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.302 job13: (groupid=0, jobs=1): err= 0: pid=71523: Mon Jul 22 17:18:25 2024
00:14:06.302 read: IOPS=88, BW=11.1MiB/s (11.6MB/s)(100MiB/9018msec)
00:14:06.302 slat (usec): min=6, max=3705, avg=59.10, stdev=181.20
00:14:06.302 clat (usec): min=4363, max=33207, avg=11453.38, stdev=4174.73
00:14:06.302 lat (usec): min=4397, max=33227, avg=11512.48, stdev=4180.45
00:14:06.302 clat percentiles (usec):
00:14:06.302 | 1.00th=[ 5800], 5.00th=[ 6587], 10.00th=[ 7308], 20.00th=[ 8356],
00:14:06.302 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10421], 60.00th=[11076],
00:14:06.303 | 70.00th=[12256], 80.00th=[13960], 90.00th=[16712], 95.00th=[20579],
00:14:06.303 | 99.00th=[25035], 99.50th=[28443], 99.90th=[33162], 99.95th=[33162],
00:14:06.303 | 99.99th=[33162]
00:14:06.303 write: IOPS=105, BW=13.1MiB/s (13.8MB/s)(117MiB/8884msec); 0 zone resets
00:14:06.303 slat (usec): min=31, max=6369, avg=131.68, stdev=263.67
00:14:06.303 clat (msec): min=4, max=273, avg=75.31, stdev=35.90
00:14:06.303 lat (msec): min=4, max=273, avg=75.44, stdev=35.90
00:14:06.303 clat percentiles (msec):
00:14:06.303 | 1.00th=[ 13], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 53],
00:14:06.303 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 65], 60.00th=[ 70],
00:14:06.303 | 70.00th=[ 81], 80.00th=[ 93], 90.00th=[ 114], 95.00th=[ 148],
00:14:06.303 | 99.00th=[ 220], 99.50th=[ 243], 99.90th=[ 275], 99.95th=[ 275],
00:14:06.303 | 99.99th=[ 275]
00:14:06.303 bw ( KiB/s): min= 2810, max=23040, per=1.21%, avg=11864.30, stdev=5469.35, samples=20
00:14:06.303 iops : min= 21, max= 180, avg=92.55, stdev=42.87, samples=20
00:14:06.303 lat (msec) : 10=20.07%, 20=24.34%, 50=7.67%, 100=39.56%, 250=8.13%
00:14:06.303 lat (msec) : 500=0.23%
00:14:06.303 cpu : usr=0.56%, sys=0.42%, ctx=2854, majf=0, minf=1
00:14:06.303 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.303 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.303 issued rwts: total=800,934,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.303 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.303 job14: (groupid=0, jobs=1): err= 0: pid=71524: Mon Jul 22 17:18:25 2024
00:14:06.303 read: IOPS=91, BW=11.4MiB/s (12.0MB/s)(100MiB/8763msec)
00:14:06.303 slat (usec): min=7, max=1408, avg=58.30, stdev=108.17
00:14:06.303 clat (usec): min=4262, max=53302, avg=12584.11, stdev=5909.30
00:14:06.303 lat (usec): min=4313, max=53313, avg=12642.41, stdev=5907.04
00:14:06.303 clat percentiles (usec):
00:14:06.303 | 1.00th=[ 4555], 5.00th=[ 5473], 10.00th=[ 6194], 20.00th=[ 8225],
00:14:06.303 | 30.00th=[ 9503], 40.00th=[10552], 50.00th=[11731], 60.00th=[13042],
00:14:06.303 | 70.00th=[14353], 80.00th=[15926], 90.00th=[19268], 95.00th=[21890],
00:14:06.303 | 99.00th=[33424], 99.50th=[40633], 99.90th=[53216], 99.95th=[53216],
00:14:06.303 | 99.99th=[53216]
00:14:06.303 write: IOPS=104, BW=13.1MiB/s (13.7MB/s)(115MiB/8770msec); 0 zone resets
00:14:06.303 slat (usec): min=31, max=4029, avg=129.83, stdev=201.39
00:14:06.303 clat (msec): min=38, max=276, avg=75.44, stdev=35.70
00:14:06.303 lat (msec): min=38, max=276, avg=75.57, stdev=35.70
00:14:06.303 clat percentiles (msec):
00:14:06.303 | 1.00th=[ 46], 5.00th=[ 50], 10.00th=[ 50], 20.00th=[ 52],
00:14:06.303 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 68],
00:14:06.303 | 70.00th=[ 77], 80.00th=[ 94], 90.00th=[ 123], 95.00th=[ 153],
00:14:06.303 | 99.00th=[ 215], 99.50th=[ 239], 99.90th=[ 275], 99.95th=[ 275],
00:14:06.303 | 99.99th=[ 275]
00:14:06.303 bw ( KiB/s): min= 1024, max=18432, per=1.18%, avg=11615.30, stdev=5527.38, samples=20
00:14:06.303 iops : min= 8, max= 144, avg=90.55, stdev=43.18, samples=20
00:14:06.303 lat (msec) : 10=17.02%, 20=25.87%, 50=9.67%, 100=37.94%, 250=9.27%
00:14:06.303 lat (msec) : 500=0.23%
00:14:06.303 cpu : usr=0.67%, sys=0.35%, ctx=2851, majf=0, minf=1
00:14:06.303 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.303 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.303 issued rwts: total=800,916,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.303 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.303 job15: (groupid=0, jobs=1): err= 0: pid=71526: Mon Jul 22 17:18:25 2024
00:14:06.303 read: IOPS=96, BW=12.1MiB/s (12.7MB/s)(100MiB/8286msec)
00:14:06.303 slat (usec): min=6, max=1810, avg=49.24, stdev=120.95
00:14:06.303 clat (usec): min=3220, max=53770, avg=10572.78, stdev=6095.27
00:14:06.303 lat (usec): min=3909, max=53999, avg=10622.01, stdev=6106.76
00:14:06.303 clat percentiles (usec):
00:14:06.303 | 1.00th=[ 4080], 5.00th=[ 4424], 10.00th=[ 5145], 20.00th=[ 6128],
00:14:06.303 | 30.00th=[ 7046], 40.00th=[ 7767], 50.00th=[ 8586], 60.00th=[10159],
00:14:06.303 | 70.00th=[11863], 80.00th=[14222], 90.00th=[17695], 95.00th=[21365],
00:14:06.303 | 99.00th=[32375], 99.50th=[35390], 99.90th=[53740], 99.95th=[53740],
00:14:06.303 | 99.99th=[53740]
00:14:06.303 write: IOPS=91, BW=11.5MiB/s (12.0MB/s)(103MiB/8977msec); 0 zone resets
00:14:06.303 slat (usec): min=35, max=11686, avg=147.23, stdev=449.54
00:14:06.303 clat (msec): min=46, max=273, avg=86.44, stdev=35.18
00:14:06.303 lat (msec): min=46, max=273, avg=86.58, stdev=35.18
00:14:06.303 clat percentiles (msec):
00:14:06.303 | 1.00th=[ 50], 5.00th=[ 52], 10.00th=[ 54], 20.00th=[ 56],
00:14:06.303 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 77], 60.00th=[ 88],
00:14:06.303 | 70.00th=[ 99], 80.00th=[ 115], 90.00th=[ 130], 95.00th=[ 146],
00:14:06.303 | 99.00th=[ 213], 99.50th=[ 236], 99.90th=[ 275], 99.95th=[ 275],
00:14:06.303 | 99.99th=[ 275]
00:14:06.303 bw ( KiB/s): min= 3328, max=17152, per=1.06%, avg=10431.90, stdev=4218.05, samples=20
00:14:06.303 iops : min= 26, max= 134, avg=81.40, stdev=32.93, samples=20
00:14:06.303 lat (msec) : 4=0.18%, 10=28.90%, 20=16.51%, 50=4.25%, 100=35.55%
00:14:06.303 lat (msec) : 250=14.48%, 500=0.12%
00:14:06.303 cpu : usr=0.61%, sys=0.26%, ctx=2804, majf=0, minf=7
00:14:06.303 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.303 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.303 issued rwts: total=800,823,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.303 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.303 job16: (groupid=0, jobs=1): err= 0: pid=71527: Mon Jul 22 17:18:25 2024
00:14:06.303 read: IOPS=91, BW=11.4MiB/s (11.9MB/s)(100MiB/8779msec)
00:14:06.303 slat (usec): min=5, max=1359, avg=54.48, stdev=117.25
00:14:06.303 clat (usec): min=6677, max=36147, avg=13578.84, stdev=4539.20
00:14:06.303 lat (usec): min=6833, max=36158, avg=13633.33, stdev=4533.32
00:14:06.303 clat percentiles (usec):
00:14:06.303 | 1.00th=[ 7111], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10159],
00:14:06.303 | 30.00th=[10814], 40.00th=[11469], 50.00th=[12256], 60.00th=[13042],
00:14:06.303 | 70.00th=[14091], 80.00th=[16909], 90.00th=[19792], 95.00th=[21627],
00:14:06.303 | 99.00th=[30540], 99.50th=[31327], 99.90th=[35914], 99.95th=[35914],
00:14:06.303 | 99.99th=[35914]
00:14:06.303 write: IOPS=107, BW=13.4MiB/s (14.1MB/s)(117MiB/8671msec); 0 zone resets
00:14:06.303 slat (usec): min=37, max=1766, avg=137.80, stdev=185.63
00:14:06.303 clat (msec): min=32, max=225, avg=73.48, stdev=30.20
00:14:06.303 lat (msec): min=32, max=225, avg=73.61, stdev=30.20
00:14:06.303 clat percentiles (msec):
00:14:06.303 | 1.00th=[ 47], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 52],
00:14:06.303 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 63], 60.00th=[ 69],
00:14:06.303 | 70.00th=[ 77], 80.00th=[ 91], 90.00th=[ 112], 95.00th=[ 136],
00:14:06.303 | 99.00th=[ 190], 99.50th=[ 211], 99.90th=[ 226], 99.95th=[ 226],
00:14:06.303 | 99.99th=[ 226]
00:14:06.303 bw ( KiB/s): min= 2816, max=18981, per=1.20%, avg=11815.85, stdev=5346.57, samples=20
00:14:06.303 iops : min= 22, max= 148, avg=91.95, stdev=41.94, samples=20
00:14:06.303 lat (msec) : 10=8.08%, 20=33.60%, 50=9.64%, 100=40.88%, 250=7.79%
00:14:06.303 cpu : usr=0.76%, sys=0.30%, ctx=2771, majf=0, minf=7
00:14:06.303 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.303 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.303 issued rwts: total=800,932,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.303 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.303 job17: (groupid=0, jobs=1): err= 0: pid=71528: Mon Jul 22 17:18:25 2024
00:14:06.303 read: IOPS=92, BW=11.5MiB/s (12.1MB/s)(100MiB/8689msec)
00:14:06.303 slat (usec): min=6, max=910, avg=52.27, stdev=90.91
00:14:06.303 clat (usec): min=3942, max=99673, avg=16224.55, stdev=12228.75
00:14:06.303 lat (usec): min=3967, max=99695, avg=16276.81, stdev=12226.32
00:14:06.303 clat percentiles (msec):
00:14:06.303 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 10],
00:14:06.303 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 16],
00:14:06.303 | 70.00th=[ 18], 80.00th=[ 20], 90.00th=[ 25], 95.00th=[ 34],
00:14:06.303 | 99.00th=[ 79], 99.50th=[ 95], 99.90th=[ 101], 99.95th=[ 101],
00:14:06.303 | 99.99th=[ 101]
00:14:06.303 write: IOPS=100, BW=12.6MiB/s (13.2MB/s)(106MiB/8404msec); 0 zone resets
00:14:06.303 slat (usec): min=32, max=7890, avg=133.77, stdev=330.49
00:14:06.303 clat (msec): min=22, max=246, avg=78.71, stdev=33.86
00:14:06.303 lat (msec): min=23, max=246, avg=78.85, stdev=33.83
00:14:06.303 clat percentiles (msec):
00:14:06.303 | 1.00th=[ 47], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 53],
00:14:06.303 | 30.00th=[ 55], 40.00th=[ 60], 50.00th=[ 66], 60.00th=[ 75],
00:14:06.303 | 70.00th=[ 89], 80.00th=[ 104], 90.00th=[ 126], 95.00th=[ 142],
00:14:06.303 | 99.00th=[ 199], 99.50th=[ 209], 99.90th=[ 247], 99.95th=[ 247],
00:14:06.303 | 99.99th=[ 247]
00:14:06.303 bw ( KiB/s): min= 3840, max=18468, per=1.09%, avg=10725.55, stdev=4472.48, samples=20
00:14:06.303 iops : min= 30, max= 144, avg=83.70, stdev=34.88, samples=20
00:14:06.303 lat (msec) : 4=0.12%, 10=13.43%, 20=25.76%, 50=12.45%, 100=36.94%
00:14:06.303 lat (msec) : 250=11.30%
00:14:06.303 cpu : usr=0.55%, sys=0.36%, ctx=2700, majf=0, minf=3
00:14:06.303 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.303 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.303 issued rwts: total=800,846,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.303 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.304 job18: (groupid=0, jobs=1): err= 0: pid=71529: Mon Jul 22 17:18:25 2024
00:14:06.304 read: IOPS=93, BW=11.7MiB/s (12.3MB/s)(100MiB/8532msec)
00:14:06.304 slat (usec): min=6, max=2517, avg=59.22, stdev=171.09
00:14:06.304 clat (usec): min=1289, max=110090, avg=13337.66, stdev=10943.88
00:14:06.304 lat (msec): min=3, max=110, avg=13.40, stdev=10.94
00:14:06.304 clat percentiles (msec):
00:14:06.304 | 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 8],
00:14:06.304 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13],
00:14:06.304 | 70.00th=[ 15], 80.00th=[ 17], 90.00th=[ 21], 95.00th=[ 25],
00:14:06.304 | 99.00th=[ 61], 99.50th=[ 97], 99.90th=[ 110], 99.95th=[ 110],
00:14:06.304 | 99.99th=[ 110]
00:14:06.304 write: IOPS=93, BW=11.6MiB/s (12.2MB/s)(101MiB/8706msec); 0 zone resets
00:14:06.304 slat (usec): min=36, max=2631, avg=134.68, stdev=198.68
00:14:06.304 clat (msec): min=14, max=257, avg=85.29, stdev=33.00
00:14:06.304 lat (msec): min=14, max=257, avg=85.43, stdev=33.02
00:14:06.304 clat percentiles (msec):
00:14:06.304 | 1.00th=[ 31], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 56],
00:14:06.304 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 80], 60.00th=[ 90],
00:14:06.304 | 70.00th=[ 100], 80.00th=[ 110], 90.00th=[ 127], 95.00th=[ 144],
00:14:06.304 | 99.00th=[ 182], 99.50th=[ 215], 99.90th=[ 257], 99.95th=[ 257],
00:14:06.304 | 99.99th=[ 257]
00:14:06.304 bw ( KiB/s): min= 2816, max=19968, per=1.05%, avg=10262.85, stdev=4471.68, samples=20
00:14:06.304 iops : min= 22, max= 156, avg=80.10, stdev=34.98, samples=20
00:14:06.304 lat (msec) : 2=0.06%, 4=0.25%, 10=19.01%, 20=25.47%, 50=8.51%
00:14:06.304 lat (msec) : 100=32.42%, 250=14.22%, 500=0.06%
00:14:06.304 cpu : usr=0.53%, sys=0.37%, ctx=2630, majf=0, minf=1
00:14:06.304 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.304 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.304 issued rwts: total=800,810,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.304 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.304 job19: (groupid=0, jobs=1): err= 0: pid=71530: Mon Jul 22 17:18:25 2024
00:14:06.304 read: IOPS=92, BW=11.6MiB/s (12.2MB/s)(100MiB/8611msec)
00:14:06.304 slat (usec): min=5, max=3731, avg=63.16, stdev=166.52
00:14:06.304 clat (msec): min=5, max=107, avg=13.80, stdev= 9.38
00:14:06.304 lat (msec): min=5, max=107, avg=13.86, stdev= 9.38
00:14:06.304 clat percentiles (msec):
00:14:06.304 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10],
00:14:06.304 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 14],
00:14:06.304 | 70.00th=[ 15], 80.00th=[ 16], 90.00th=[ 19], 95.00th=[ 22],
00:14:06.304 | 99.00th=[ 50], 99.50th=[ 91], 99.90th=[ 108], 99.95th=[ 108],
00:14:06.304 | 99.99th=[ 108]
00:14:06.304 write: IOPS=106, BW=13.3MiB/s (13.9MB/s)(115MiB/8644msec); 0 zone resets
00:14:06.304 slat (usec): min=37, max=1718, avg=119.84, stdev=149.27
00:14:06.304 clat (msec): min=43, max=257, avg=74.54, stdev=32.30
00:14:06.304 lat (msec): min=43, max=257, avg=74.66, stdev=32.30
00:14:06.304 clat percentiles (msec):
00:14:06.304 | 1.00th=[ 47], 5.00th=[ 49], 10.00th=[ 51], 20.00th=[ 53],
00:14:06.304 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 66], 60.00th=[ 70],
00:14:06.304 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 114], 95.00th=[ 138],
00:14:06.304 | 99.00th=[ 213], 99.50th=[ 239], 99.90th=[ 257], 99.95th=[ 257],
00:14:06.304 | 99.99th=[ 257]
00:14:06.304 bw ( KiB/s): min= 3584, max=17699, per=1.19%, avg=11646.00, stdev=4945.10, samples=20
00:14:06.304 iops : min= 28, max= 138, avg=90.80, stdev=38.59, samples=20
00:14:06.304 lat (msec) : 10=12.75%, 20=30.62%, 50=7.80%, 100=41.33%, 250=7.33%
00:14:06.304 lat (msec) : 500=0.17%
00:14:06.304 cpu : usr=0.63%, sys=0.34%, ctx=2867, majf=0, minf=1
00:14:06.304 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.304 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.304 issued rwts: total=800,918,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.304 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.304 job20: (groupid=0, jobs=1): err= 0: pid=71531: Mon Jul 22 17:18:25 2024
00:14:06.304 read: IOPS=92, BW=11.5MiB/s (12.1MB/s)(106MiB/9185msec)
00:14:06.304 slat (usec): min=7, max=1630, avg=69.85, stdev=141.90
00:14:06.304 clat (msec): min=2, max=158, avg=15.84, stdev=17.91
00:14:06.304 lat (msec): min=3, max=158, avg=15.91, stdev=17.92
00:14:06.304 clat percentiles (msec):
00:14:06.304 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 8],
00:14:06.304 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 12], 60.00th=[ 13],
00:14:06.304 | 70.00th=[ 16], 80.00th=[ 20], 90.00th=[ 25], 95.00th=[ 34],
00:14:06.304 | 99.00th=[ 108], 99.50th=[ 127], 99.90th=[ 159], 99.95th=[ 159],
00:14:06.304 | 99.99th=[ 159]
00:14:06.304 write: IOPS=115, BW=14.4MiB/s (15.1MB/s)(120MiB/8311msec); 0 zone resets
00:14:06.304 slat (usec): min=37, max=4158, avg=126.33, stdev=205.77
00:14:06.304 clat (usec): min=981, max=234630, avg=68671.21, stdev=33589.41
00:14:06.304 lat (usec): min=1093, max=234696, avg=68797.54, stdev=33599.89
00:14:06.304 clat percentiles (msec):
00:14:06.304 | 1.00th=[ 6], 5.00th=[ 23], 10.00th=[ 47], 20.00th=[ 49],
00:14:06.304 | 30.00th=[ 52], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 65],
00:14:06.304 | 70.00th=[ 71], 80.00th=[ 87], 90.00th=[ 120], 95.00th=[ 136],
00:14:06.304 | 99.00th=[ 188], 99.50th=[ 222], 99.90th=[ 234], 99.95th=[ 234],
00:14:06.304 | 99.99th=[ 234]
00:14:06.304 bw ( KiB/s): min= 2048, max=27959, per=1.26%, avg=12398.05, stdev=6666.71, samples=19
00:14:06.304 iops : min= 16, max= 218, avg=96.79, stdev=52.08, samples=19
00:14:06.304 lat (usec) : 1000=0.06% 00:14:06.304 lat (msec) : 4=0.22%, 10=20.97%, 20=20.42%, 50=16.38%, 100=32.82% 00:14:06.304 lat (msec) : 250=9.13% 00:14:06.304 cpu : usr=0.66%, sys=0.37%, ctx=2923, majf=0, minf=1 00:14:06.304 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.304 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.304 issued rwts: total=847,960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.304 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.304 job21: (groupid=0, jobs=1): err= 0: pid=71532: Mon Jul 22 17:18:25 2024 00:14:06.304 read: IOPS=90, BW=11.4MiB/s (11.9MB/s)(100MiB/8793msec) 00:14:06.304 slat (usec): min=6, max=1024, avg=56.44, stdev=112.05 00:14:06.304 clat (usec): min=5716, max=72639, avg=16527.67, stdev=9155.70 00:14:06.304 lat (usec): min=5926, max=72657, avg=16584.11, stdev=9153.27 00:14:06.304 clat percentiles (usec): 00:14:06.304 | 1.00th=[ 7046], 5.00th=[ 7701], 10.00th=[ 8455], 20.00th=[ 9896], 00:14:06.304 | 30.00th=[11076], 40.00th=[12911], 50.00th=[14353], 60.00th=[15926], 00:14:06.304 | 70.00th=[17957], 80.00th=[19792], 90.00th=[27395], 95.00th=[36439], 00:14:06.304 | 99.00th=[51643], 99.50th=[62129], 99.90th=[72877], 99.95th=[72877], 00:14:06.304 | 99.99th=[72877] 00:14:06.304 write: IOPS=111, BW=13.9MiB/s (14.6MB/s)(117MiB/8365msec); 0 zone resets 00:14:06.304 slat (usec): min=37, max=1840, avg=120.83, stdev=164.41 00:14:06.304 clat (msec): min=36, max=229, avg=70.96, stdev=30.01 00:14:06.304 lat (msec): min=36, max=229, avg=71.08, stdev=30.02 00:14:06.304 clat percentiles (msec): 00:14:06.304 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 51], 00:14:06.304 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 65], 00:14:06.304 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 105], 95.00th=[ 138], 00:14:06.304 | 99.00th=[ 199], 99.50th=[ 218], 99.90th=[ 
230], 99.95th=[ 230], 00:14:06.304 | 99.99th=[ 230] 00:14:06.304 bw ( KiB/s): min= 1792, max=18981, per=1.21%, avg=11844.25, stdev=5542.93, samples=20 00:14:06.304 iops : min= 14, max= 148, avg=92.35, stdev=43.21, samples=20 00:14:06.304 lat (msec) : 10=9.52%, 20=27.64%, 50=17.54%, 100=38.95%, 250=6.35% 00:14:06.304 cpu : usr=0.71%, sys=0.28%, ctx=2821, majf=0, minf=1 00:14:06.304 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.304 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.304 issued rwts: total=800,933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.304 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.304 job22: (groupid=0, jobs=1): err= 0: pid=71533: Mon Jul 22 17:18:25 2024 00:14:06.304 read: IOPS=96, BW=12.1MiB/s (12.7MB/s)(100MiB/8285msec) 00:14:06.304 slat (usec): min=5, max=1334, avg=55.16, stdev=118.48 00:14:06.304 clat (msec): min=3, max=243, avg=16.59, stdev=27.15 00:14:06.304 lat (msec): min=3, max=243, avg=16.64, stdev=27.15 00:14:06.305 clat percentiles (msec): 00:14:06.305 | 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 8], 00:14:06.305 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 12], 00:14:06.305 | 70.00th=[ 14], 80.00th=[ 17], 90.00th=[ 23], 95.00th=[ 44], 00:14:06.305 | 99.00th=[ 178], 99.50th=[ 192], 99.90th=[ 245], 99.95th=[ 245], 00:14:06.305 | 99.99th=[ 245] 00:14:06.305 write: IOPS=96, BW=12.0MiB/s (12.6MB/s)(100MiB/8328msec); 0 zone resets 00:14:06.305 slat (usec): min=38, max=2694, avg=131.58, stdev=181.34 00:14:06.305 clat (msec): min=44, max=397, avg=82.55, stdev=40.86 00:14:06.305 lat (msec): min=45, max=397, avg=82.68, stdev=40.87 00:14:06.305 clat percentiles (msec): 00:14:06.305 | 1.00th=[ 47], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 54], 00:14:06.305 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 77], 00:14:06.305 | 70.00th=[ 89], 80.00th=[ 
111], 90.00th=[ 131], 95.00th=[ 148], 00:14:06.305 | 99.00th=[ 215], 99.50th=[ 326], 99.90th=[ 397], 99.95th=[ 397], 00:14:06.305 | 99.99th=[ 397] 00:14:06.305 bw ( KiB/s): min= 1026, max=18432, per=1.06%, avg=10371.42, stdev=5622.05, samples=19 00:14:06.305 iops : min= 8, max= 144, avg=80.89, stdev=43.85, samples=19 00:14:06.305 lat (msec) : 4=0.12%, 10=24.19%, 20=20.25%, 50=8.31%, 100=32.94% 00:14:06.305 lat (msec) : 250=13.69%, 500=0.50% 00:14:06.305 cpu : usr=0.59%, sys=0.31%, ctx=2595, majf=0, minf=1 00:14:06.305 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.305 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.305 issued rwts: total=800,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.305 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.305 job23: (groupid=0, jobs=1): err= 0: pid=71534: Mon Jul 22 17:18:25 2024 00:14:06.305 read: IOPS=90, BW=11.3MiB/s (11.9MB/s)(100MiB/8823msec) 00:14:06.305 slat (usec): min=6, max=1789, avg=60.39, stdev=133.15 00:14:06.305 clat (msec): min=6, max=158, avg=16.32, stdev=14.68 00:14:06.305 lat (msec): min=6, max=158, avg=16.38, stdev=14.68 00:14:06.305 clat percentiles (msec): 00:14:06.305 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 11], 00:14:06.305 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 16], 00:14:06.305 | 70.00th=[ 17], 80.00th=[ 19], 90.00th=[ 24], 95.00th=[ 29], 00:14:06.305 | 99.00th=[ 77], 99.50th=[ 148], 99.90th=[ 159], 99.95th=[ 159], 00:14:06.305 | 99.99th=[ 159] 00:14:06.305 write: IOPS=113, BW=14.1MiB/s (14.8MB/s)(119MiB/8387msec); 0 zone resets 00:14:06.305 slat (usec): min=36, max=4808, avg=125.09, stdev=241.51 00:14:06.305 clat (msec): min=37, max=260, avg=70.12, stdev=31.16 00:14:06.305 lat (msec): min=37, max=260, avg=70.25, stdev=31.16 00:14:06.305 clat percentiles (msec): 00:14:06.305 | 1.00th=[ 45], 5.00th=[ 47], 
10.00th=[ 48], 20.00th=[ 50], 00:14:06.305 | 30.00th=[ 54], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 64], 00:14:06.305 | 70.00th=[ 72], 80.00th=[ 85], 90.00th=[ 105], 95.00th=[ 129], 00:14:06.305 | 99.00th=[ 205], 99.50th=[ 224], 99.90th=[ 262], 99.95th=[ 262], 00:14:06.305 | 99.99th=[ 262] 00:14:06.305 bw ( KiB/s): min= 2048, max=19456, per=1.23%, avg=12036.05, stdev=5786.81, samples=20 00:14:06.305 iops : min= 16, max= 152, avg=93.85, stdev=45.18, samples=20 00:14:06.305 lat (msec) : 10=8.81%, 20=29.92%, 50=17.62%, 100=37.01%, 250=6.46% 00:14:06.305 lat (msec) : 500=0.17% 00:14:06.305 cpu : usr=0.57%, sys=0.41%, ctx=2853, majf=0, minf=3 00:14:06.305 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.305 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.305 issued rwts: total=800,948,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.305 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.305 job24: (groupid=0, jobs=1): err= 0: pid=71535: Mon Jul 22 17:18:25 2024 00:14:06.305 read: IOPS=91, BW=11.4MiB/s (11.9MB/s)(100MiB/8788msec) 00:14:06.305 slat (usec): min=7, max=1875, avg=71.06, stdev=164.34 00:14:06.305 clat (usec): min=7047, max=80176, avg=16605.38, stdev=8767.52 00:14:06.305 lat (usec): min=7065, max=80188, avg=16676.44, stdev=8755.17 00:14:06.305 clat percentiles (usec): 00:14:06.305 | 1.00th=[ 7504], 5.00th=[ 8455], 10.00th=[ 9634], 20.00th=[10945], 00:14:06.305 | 30.00th=[12125], 40.00th=[13173], 50.00th=[14091], 60.00th=[15270], 00:14:06.305 | 70.00th=[16909], 80.00th=[19792], 90.00th=[26608], 95.00th=[31589], 00:14:06.305 | 99.00th=[50594], 99.50th=[54789], 99.90th=[80217], 99.95th=[80217], 00:14:06.305 | 99.99th=[80217] 00:14:06.305 write: IOPS=112, BW=14.1MiB/s (14.7MB/s)(117MiB/8353msec); 0 zone resets 00:14:06.305 slat (usec): min=36, max=1388, avg=128.35, stdev=158.16 00:14:06.305 clat (msec): 
min=31, max=232, avg=70.39, stdev=28.46 00:14:06.305 lat (msec): min=31, max=232, avg=70.51, stdev=28.48 00:14:06.305 clat percentiles (msec): 00:14:06.305 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 52], 00:14:06.305 | 30.00th=[ 54], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 66], 00:14:06.305 | 70.00th=[ 73], 80.00th=[ 86], 90.00th=[ 105], 95.00th=[ 123], 00:14:06.305 | 99.00th=[ 190], 99.50th=[ 209], 99.90th=[ 232], 99.95th=[ 232], 00:14:06.305 | 99.99th=[ 232] 00:14:06.305 bw ( KiB/s): min= 512, max=18650, per=1.21%, avg=11911.05, stdev=5832.20, samples=20 00:14:06.305 iops : min= 4, max= 145, avg=92.75, stdev=45.63, samples=20 00:14:06.305 lat (msec) : 10=5.69%, 20=31.45%, 50=17.48%, 100=38.76%, 250=6.61% 00:14:06.305 cpu : usr=0.69%, sys=0.34%, ctx=2911, majf=0, minf=5 00:14:06.305 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.305 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.305 issued rwts: total=800,939,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.305 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.305 job25: (groupid=0, jobs=1): err= 0: pid=71536: Mon Jul 22 17:18:25 2024 00:14:06.305 read: IOPS=91, BW=11.5MiB/s (12.0MB/s)(100MiB/8722msec) 00:14:06.305 slat (usec): min=5, max=1629, avg=54.15, stdev=114.18 00:14:06.305 clat (msec): min=3, max=120, avg=17.89, stdev=19.55 00:14:06.305 lat (msec): min=3, max=120, avg=17.94, stdev=19.55 00:14:06.305 clat percentiles (msec): 00:14:06.305 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 8], 00:14:06.305 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 14], 00:14:06.305 | 70.00th=[ 16], 80.00th=[ 20], 90.00th=[ 32], 95.00th=[ 58], 00:14:06.305 | 99.00th=[ 109], 99.50th=[ 116], 99.90th=[ 122], 99.95th=[ 122], 00:14:06.305 | 99.99th=[ 122] 00:14:06.305 write: IOPS=100, BW=12.5MiB/s (13.1MB/s)(103MiB/8226msec); 0 zone resets 
00:14:06.305 slat (usec): min=30, max=5311, avg=146.64, stdev=296.00 00:14:06.305 clat (msec): min=38, max=257, avg=79.03, stdev=35.03 00:14:06.305 lat (msec): min=38, max=257, avg=79.18, stdev=35.05 00:14:06.305 clat percentiles (msec): 00:14:06.305 | 1.00th=[ 43], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 52], 00:14:06.305 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 77], 00:14:06.305 | 70.00th=[ 89], 80.00th=[ 101], 90.00th=[ 123], 95.00th=[ 144], 00:14:06.305 | 99.00th=[ 215], 99.50th=[ 228], 99.90th=[ 257], 99.95th=[ 257], 00:14:06.305 | 99.99th=[ 257] 00:14:06.305 bw ( KiB/s): min= 2560, max=18139, per=1.07%, avg=10462.75, stdev=4708.73, samples=20 00:14:06.305 iops : min= 20, max= 141, avg=81.55, stdev=36.82, samples=20 00:14:06.305 lat (msec) : 4=0.62%, 10=16.62%, 20=22.46%, 50=13.85%, 100=35.20% 00:14:06.305 lat (msec) : 250=11.20%, 500=0.06% 00:14:06.305 cpu : usr=0.47%, sys=0.44%, ctx=2709, majf=0, minf=5 00:14:06.305 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.305 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.305 issued rwts: total=800,825,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.305 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.305 job26: (groupid=0, jobs=1): err= 0: pid=71537: Mon Jul 22 17:18:25 2024 00:14:06.305 read: IOPS=87, BW=10.9MiB/s (11.4MB/s)(90.5MiB/8291msec) 00:14:06.305 slat (usec): min=5, max=1728, avg=53.15, stdev=123.19 00:14:06.305 clat (msec): min=2, max=305, avg=16.95, stdev=33.90 00:14:06.305 lat (msec): min=2, max=305, avg=17.00, stdev=33.90 00:14:06.305 clat percentiles (msec): 00:14:06.305 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 8], 00:14:06.305 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 11], 00:14:06.305 | 70.00th=[ 13], 80.00th=[ 16], 90.00th=[ 25], 95.00th=[ 39], 00:14:06.305 | 99.00th=[ 275], 99.50th=[ 305], 
99.90th=[ 305], 99.95th=[ 305], 00:14:06.305 | 99.99th=[ 305] 00:14:06.305 write: IOPS=94, BW=11.8MiB/s (12.4MB/s)(100MiB/8455msec); 0 zone resets 00:14:06.305 slat (usec): min=30, max=3516, avg=142.15, stdev=244.29 00:14:06.305 clat (msec): min=45, max=270, avg=83.89, stdev=37.03 00:14:06.305 lat (msec): min=46, max=270, avg=84.03, stdev=37.04 00:14:06.305 clat percentiles (msec): 00:14:06.305 | 1.00th=[ 47], 5.00th=[ 49], 10.00th=[ 53], 20.00th=[ 56], 00:14:06.305 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 80], 00:14:06.305 | 70.00th=[ 91], 80.00th=[ 115], 90.00th=[ 138], 95.00th=[ 153], 00:14:06.305 | 99.00th=[ 228], 99.50th=[ 243], 99.90th=[ 271], 99.95th=[ 271], 00:14:06.305 | 99.99th=[ 271] 00:14:06.305 bw ( KiB/s): min= 1536, max=17920, per=1.06%, avg=10359.63, stdev=4968.82, samples=19 00:14:06.305 iops : min= 12, max= 140, avg=80.79, stdev=38.82, samples=19 00:14:06.305 lat (msec) : 4=0.33%, 10=24.08%, 20=17.65%, 50=7.15%, 100=36.61% 00:14:06.305 lat (msec) : 250=13.45%, 500=0.72% 00:14:06.305 cpu : usr=0.50%, sys=0.36%, ctx=2551, majf=0, minf=3 00:14:06.305 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.306 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.306 issued rwts: total=724,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.306 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.306 job27: (groupid=0, jobs=1): err= 0: pid=71538: Mon Jul 22 17:18:25 2024 00:14:06.306 read: IOPS=90, BW=11.4MiB/s (11.9MB/s)(100MiB/8806msec) 00:14:06.306 slat (usec): min=5, max=1295, avg=60.38, stdev=119.28 00:14:06.306 clat (usec): min=3158, max=62354, avg=15354.54, stdev=9204.84 00:14:06.306 lat (usec): min=3342, max=62360, avg=15414.92, stdev=9197.73 00:14:06.306 clat percentiles (usec): 00:14:06.306 | 1.00th=[ 4359], 5.00th=[ 5538], 10.00th=[ 6259], 20.00th=[ 8029], 00:14:06.306 | 
30.00th=[ 9372], 40.00th=[11207], 50.00th=[13829], 60.00th=[15008], 00:14:06.306 | 70.00th=[16909], 80.00th=[20841], 90.00th=[27657], 95.00th=[31851], 00:14:06.306 | 99.00th=[49546], 99.50th=[55313], 99.90th=[62129], 99.95th=[62129], 00:14:06.306 | 99.99th=[62129] 00:14:06.306 write: IOPS=103, BW=12.9MiB/s (13.6MB/s)(110MiB/8489msec); 0 zone resets 00:14:06.306 slat (usec): min=37, max=4553, avg=145.53, stdev=242.80 00:14:06.306 clat (msec): min=18, max=324, avg=76.37, stdev=38.52 00:14:06.306 lat (msec): min=18, max=324, avg=76.52, stdev=38.53 00:14:06.306 clat percentiles (msec): 00:14:06.306 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 51], 00:14:06.306 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 65], 60.00th=[ 71], 00:14:06.306 | 70.00th=[ 81], 80.00th=[ 95], 90.00th=[ 116], 95.00th=[ 150], 00:14:06.306 | 99.00th=[ 232], 99.50th=[ 264], 99.90th=[ 326], 99.95th=[ 326], 00:14:06.306 | 99.99th=[ 326] 00:14:06.306 bw ( KiB/s): min= 2048, max=20264, per=1.14%, avg=11144.30, stdev=5459.97, samples=20 00:14:06.306 iops : min= 16, max= 158, avg=86.70, stdev=42.81, samples=20 00:14:06.306 lat (msec) : 4=0.18%, 10=15.78%, 20=21.74%, 50=19.95%, 100=33.06% 00:14:06.306 lat (msec) : 250=8.93%, 500=0.36% 00:14:06.306 cpu : usr=0.63%, sys=0.33%, ctx=2785, majf=0, minf=5 00:14:06.306 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.306 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.306 issued rwts: total=800,879,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.306 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.306 job28: (groupid=0, jobs=1): err= 0: pid=71539: Mon Jul 22 17:18:25 2024 00:14:06.306 read: IOPS=106, BW=13.3MiB/s (13.9MB/s)(120MiB/9047msec) 00:14:06.306 slat (usec): min=6, max=2368, avg=56.33, stdev=146.49 00:14:06.306 clat (usec): min=3181, max=49097, avg=9642.98, stdev=5459.56 00:14:06.306 
lat (usec): min=3209, max=49104, avg=9699.32, stdev=5458.09 00:14:06.306 clat percentiles (usec): 00:14:06.306 | 1.00th=[ 4146], 5.00th=[ 4817], 10.00th=[ 5276], 20.00th=[ 6128], 00:14:06.306 | 30.00th=[ 6652], 40.00th=[ 7373], 50.00th=[ 8225], 60.00th=[ 9110], 00:14:06.306 | 70.00th=[10552], 80.00th=[11863], 90.00th=[15008], 95.00th=[17695], 00:14:06.306 | 99.00th=[36439], 99.50th=[42730], 99.90th=[49021], 99.95th=[49021], 00:14:06.306 | 99.99th=[49021] 00:14:06.306 write: IOPS=110, BW=13.8MiB/s (14.4MB/s)(122MiB/8878msec); 0 zone resets 00:14:06.306 slat (usec): min=38, max=2041, avg=141.33, stdev=210.86 00:14:06.306 clat (msec): min=6, max=214, avg=72.18, stdev=29.07 00:14:06.306 lat (msec): min=6, max=214, avg=72.32, stdev=29.08 00:14:06.306 clat percentiles (msec): 00:14:06.306 | 1.00th=[ 18], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 51], 00:14:06.306 | 30.00th=[ 54], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 68], 00:14:06.306 | 70.00th=[ 80], 80.00th=[ 97], 90.00th=[ 120], 95.00th=[ 130], 00:14:06.306 | 99.00th=[ 153], 99.50th=[ 174], 99.90th=[ 215], 99.95th=[ 215], 00:14:06.306 | 99.99th=[ 215] 00:14:06.306 bw ( KiB/s): min= 5888, max=21760, per=1.26%, avg=12412.80, stdev=4711.20, samples=20 00:14:06.306 iops : min= 46, max= 170, avg=96.90, stdev=36.87, samples=20 00:14:06.306 lat (msec) : 4=0.31%, 10=32.78%, 20=15.33%, 50=10.64%, 100=31.75% 00:14:06.306 lat (msec) : 250=9.19% 00:14:06.306 cpu : usr=0.68%, sys=0.46%, ctx=3178, majf=0, minf=5 00:14:06.306 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.306 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.306 issued rwts: total=960,977,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.306 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.306 job29: (groupid=0, jobs=1): err= 0: pid=71546: Mon Jul 22 17:18:25 2024 00:14:06.306 read: IOPS=107, BW=13.4MiB/s 
(14.1MB/s)(120MiB/8922msec) 00:14:06.306 slat (usec): min=7, max=1197, avg=59.44, stdev=119.38 00:14:06.306 clat (usec): min=5315, max=44427, avg=12339.11, stdev=5916.69 00:14:06.306 lat (usec): min=5383, max=44538, avg=12398.55, stdev=5912.34 00:14:06.306 clat percentiles (usec): 00:14:06.306 | 1.00th=[ 5735], 5.00th=[ 6587], 10.00th=[ 7177], 20.00th=[ 8160], 00:14:06.306 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[10814], 60.00th=[11994], 00:14:06.306 | 70.00th=[13042], 80.00th=[15664], 90.00th=[19006], 95.00th=[23462], 00:14:06.306 | 99.00th=[36439], 99.50th=[41157], 99.90th=[44303], 99.95th=[44303], 00:14:06.306 | 99.99th=[44303] 00:14:06.306 write: IOPS=113, BW=14.2MiB/s (14.9MB/s)(121MiB/8537msec); 0 zone resets 00:14:06.306 slat (usec): min=37, max=4927, avg=134.35, stdev=245.45 00:14:06.306 clat (msec): min=25, max=296, avg=69.90, stdev=29.63 00:14:06.306 lat (msec): min=25, max=296, avg=70.03, stdev=29.64 00:14:06.306 clat percentiles (msec): 00:14:06.306 | 1.00th=[ 42], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 50], 00:14:06.306 | 30.00th=[ 54], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 65], 00:14:06.306 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 111], 95.00th=[ 129], 00:14:06.306 | 99.00th=[ 174], 99.50th=[ 192], 99.90th=[ 296], 99.95th=[ 296], 00:14:06.306 | 99.99th=[ 296] 00:14:06.306 bw ( KiB/s): min= 2565, max=19712, per=1.25%, avg=12273.20, stdev=5804.10, samples=20 00:14:06.306 iops : min= 20, max= 154, avg=95.60, stdev=45.29, samples=20 00:14:06.306 lat (msec) : 10=21.78%, 20=23.65%, 50=15.20%, 100=33.30%, 250=5.86% 00:14:06.306 lat (msec) : 500=0.21% 00:14:06.306 cpu : usr=0.65%, sys=0.43%, ctx=3130, majf=0, minf=5 00:14:06.306 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.306 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.306 issued rwts: total=960,968,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:14:06.306 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.306 job30: (groupid=0, jobs=1): err= 0: pid=71547: Mon Jul 22 17:18:25 2024 00:14:06.306 read: IOPS=69, BW=8879KiB/s (9092kB/s)(60.0MiB/6920msec) 00:14:06.306 slat (usec): min=6, max=637, avg=54.46, stdev=91.30 00:14:06.306 clat (msec): min=4, max=204, avg=19.72, stdev=25.87 00:14:06.306 lat (msec): min=4, max=204, avg=19.77, stdev=25.89 00:14:06.306 clat percentiles (msec): 00:14:06.306 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 9], 00:14:06.306 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 15], 00:14:06.306 | 70.00th=[ 19], 80.00th=[ 26], 90.00th=[ 31], 95.00th=[ 50], 00:14:06.306 | 99.00th=[ 201], 99.50th=[ 203], 99.90th=[ 205], 99.95th=[ 205], 00:14:06.306 | 99.99th=[ 205] 00:14:06.306 write: IOPS=58, BW=7490KiB/s (7669kB/s)(64.9MiB/8870msec); 0 zone resets 00:14:06.306 slat (usec): min=39, max=14187, avg=185.15, stdev=643.16 00:14:06.306 clat (msec): min=67, max=505, avg=135.75, stdev=65.35 00:14:06.306 lat (msec): min=67, max=505, avg=135.94, stdev=65.41 00:14:06.306 clat percentiles (msec): 00:14:06.306 | 1.00th=[ 72], 5.00th=[ 73], 10.00th=[ 77], 20.00th=[ 85], 00:14:06.306 | 30.00th=[ 91], 40.00th=[ 102], 50.00th=[ 121], 60.00th=[ 140], 00:14:06.306 | 70.00th=[ 148], 80.00th=[ 180], 90.00th=[ 220], 95.00th=[ 247], 00:14:06.306 | 99.00th=[ 430], 99.50th=[ 498], 99.90th=[ 506], 99.95th=[ 506], 00:14:06.306 | 99.99th=[ 506] 00:14:06.306 bw ( KiB/s): min= 768, max=12544, per=0.67%, avg=6537.45, stdev=3192.68, samples=20 00:14:06.306 iops : min= 6, max= 98, avg=50.90, stdev=25.00, samples=20 00:14:06.306 lat (msec) : 10=13.21%, 20=21.22%, 50=11.31%, 100=21.82%, 250=30.13% 00:14:06.306 lat (msec) : 500=2.10%, 750=0.20% 00:14:06.306 cpu : usr=0.38%, sys=0.20%, ctx=1690, majf=0, minf=9 00:14:06.306 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.306 
complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.306 issued rwts: total=480,519,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.306 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.306 job31: (groupid=0, jobs=1): err= 0: pid=71548: Mon Jul 22 17:18:25 2024 00:14:06.306 read: IOPS=57, BW=7336KiB/s (7512kB/s)(60.0MiB/8375msec) 00:14:06.306 slat (usec): min=6, max=1672, avg=72.70, stdev=154.71 00:14:06.306 clat (usec): min=8158, max=71555, avg=20124.72, stdev=9365.58 00:14:06.306 lat (usec): min=8518, max=71562, avg=20197.42, stdev=9351.44 00:14:06.306 clat percentiles (usec): 00:14:06.306 | 1.00th=[ 9896], 5.00th=[11600], 10.00th=[12518], 20.00th=[13304], 00:14:06.306 | 30.00th=[14091], 40.00th=[16450], 50.00th=[17695], 60.00th=[19530], 00:14:06.306 | 70.00th=[22414], 80.00th=[24773], 90.00th=[27919], 95.00th=[41681], 00:14:06.306 | 99.00th=[64226], 99.50th=[67634], 99.90th=[71828], 99.95th=[71828], 00:14:06.306 | 99.99th=[71828] 00:14:06.306 write: IOPS=72, BW=9327KiB/s (9551kB/s)(80.0MiB/8783msec); 0 zone resets 00:14:06.306 slat (usec): min=36, max=3654, avg=143.87, stdev=243.64 00:14:06.306 clat (msec): min=13, max=563, avg=108.96, stdev=60.25 00:14:06.306 lat (msec): min=13, max=563, avg=109.10, stdev=60.25 00:14:06.306 clat percentiles (msec): 00:14:06.306 | 1.00th=[ 21], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 75], 00:14:06.306 | 30.00th=[ 80], 40.00th=[ 83], 50.00th=[ 89], 60.00th=[ 94], 00:14:06.306 | 70.00th=[ 105], 80.00th=[ 124], 90.00th=[ 190], 95.00th=[ 230], 00:14:06.306 | 99.00th=[ 384], 99.50th=[ 405], 99.90th=[ 567], 99.95th=[ 567], 00:14:06.306 | 99.99th=[ 567] 00:14:06.306 bw ( KiB/s): min= 256, max=13568, per=0.83%, avg=8177.74, stdev=4463.13, samples=19 00:14:06.306 iops : min= 2, max= 106, avg=63.84, stdev=34.92, samples=19 00:14:06.306 lat (msec) : 10=0.54%, 20=26.61%, 50=15.71%, 100=37.95%, 250=17.32% 00:14:06.306 lat (msec) : 500=1.79%, 750=0.09% 00:14:06.306 cpu : usr=0.45%, sys=0.23%, ctx=1843, 
majf=0, minf=6 00:14:06.306 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.306 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.307 issued rwts: total=480,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.307 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.307 job32: (groupid=0, jobs=1): err= 0: pid=71549: Mon Jul 22 17:18:25 2024 00:14:06.307 read: IOPS=64, BW=8227KiB/s (8425kB/s)(60.0MiB/7468msec) 00:14:06.307 slat (usec): min=5, max=3161, avg=59.00, stdev=199.62 00:14:06.307 clat (msec): min=4, max=364, avg=29.61, stdev=54.13 00:14:06.307 lat (msec): min=4, max=364, avg=29.67, stdev=54.13 00:14:06.307 clat percentiles (msec): 00:14:06.307 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:14:06.307 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 17], 00:14:06.307 | 70.00th=[ 19], 80.00th=[ 25], 90.00th=[ 56], 95.00th=[ 150], 00:14:06.307 | 99.00th=[ 317], 99.50th=[ 359], 99.90th=[ 363], 99.95th=[ 363], 00:14:06.307 | 99.99th=[ 363] 00:14:06.307 write: IOPS=59, BW=7654KiB/s (7838kB/s)(61.9MiB/8278msec); 0 zone resets 00:14:06.307 slat (usec): min=35, max=1413, avg=149.07, stdev=191.59 00:14:06.307 clat (msec): min=40, max=353, avg=132.84, stdev=58.55 00:14:06.307 lat (msec): min=40, max=353, avg=132.99, stdev=58.58 00:14:06.307 clat percentiles (msec): 00:14:06.307 | 1.00th=[ 45], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 80], 00:14:06.307 | 30.00th=[ 91], 40.00th=[ 106], 50.00th=[ 118], 60.00th=[ 134], 00:14:06.307 | 70.00th=[ 153], 80.00th=[ 184], 90.00th=[ 220], 95.00th=[ 245], 00:14:06.307 | 99.00th=[ 317], 99.50th=[ 334], 99.90th=[ 355], 99.95th=[ 355], 00:14:06.307 | 99.99th=[ 355] 00:14:06.307 bw ( KiB/s): min= 512, max=12569, per=0.64%, avg=6242.80, stdev=3556.99, samples=20 00:14:06.307 iops : min= 4, max= 98, avg=48.65, stdev=27.90, samples=20 00:14:06.307 lat (msec) : 
10=13.44%, 20=22.05%, 50=9.13%, 100=20.10%, 250=32.00% 00:14:06.307 lat (msec) : 500=3.28% 00:14:06.307 cpu : usr=0.42%, sys=0.17%, ctx=1601, majf=0, minf=5 00:14:06.307 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=95.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.307 complete : 0=0.0%, 4=99.2%, 8=0.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.307 issued rwts: total=480,495,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.307 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.307 job33: (groupid=0, jobs=1): err= 0: pid=71550: Mon Jul 22 17:18:25 2024 00:14:06.307 read: IOPS=74, BW=9489KiB/s (9717kB/s)(80.0MiB/8633msec) 00:14:06.307 slat (usec): min=6, max=1319, avg=62.70, stdev=129.44 00:14:06.307 clat (msec): min=5, max=157, avg=16.20, stdev=14.46 00:14:06.307 lat (msec): min=5, max=157, avg=16.26, stdev=14.46 00:14:06.307 clat percentiles (msec): 00:14:06.307 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:14:06.307 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 15], 00:14:06.307 | 70.00th=[ 18], 80.00th=[ 20], 90.00th=[ 25], 95.00th=[ 32], 00:14:06.307 | 99.00th=[ 93], 99.50th=[ 105], 99.90th=[ 159], 99.95th=[ 159], 00:14:06.307 | 99.99th=[ 159] 00:14:06.307 write: IOPS=73, BW=9460KiB/s (9687kB/s)(81.0MiB/8768msec); 0 zone resets 00:14:06.307 slat (usec): min=33, max=2428, avg=133.98, stdev=197.53 00:14:06.307 clat (msec): min=16, max=375, avg=107.46, stdev=58.16 00:14:06.307 lat (msec): min=16, max=375, avg=107.59, stdev=58.16 00:14:06.307 clat percentiles (msec): 00:14:06.307 | 1.00th=[ 19], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 73], 00:14:06.307 | 30.00th=[ 77], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 92], 00:14:06.307 | 70.00th=[ 103], 80.00th=[ 142], 90.00th=[ 215], 95.00th=[ 232], 00:14:06.307 | 99.00th=[ 305], 99.50th=[ 338], 99.90th=[ 376], 99.95th=[ 376], 00:14:06.307 | 99.99th=[ 376] 00:14:06.307 bw ( KiB/s): min= 2043, max=16640, per=0.88%, 
avg=8635.05, stdev=4409.39, samples=19 00:14:06.307 iops : min= 15, max= 130, avg=67.37, stdev=34.50, samples=19 00:14:06.307 lat (msec) : 10=14.67%, 20=27.64%, 50=8.00%, 100=33.31%, 250=14.60% 00:14:06.307 lat (msec) : 500=1.79% 00:14:06.307 cpu : usr=0.53%, sys=0.20%, ctx=2064, majf=0, minf=1 00:14:06.307 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.307 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.307 issued rwts: total=640,648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.307 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.307 job34: (groupid=0, jobs=1): err= 0: pid=71551: Mon Jul 22 17:18:25 2024 00:14:06.307 read: IOPS=60, BW=7762KiB/s (7949kB/s)(60.0MiB/7915msec) 00:14:06.307 slat (usec): min=6, max=1684, avg=77.01, stdev=164.70 00:14:06.307 clat (usec): min=5183, max=79448, avg=23255.80, stdev=13996.16 00:14:06.307 lat (usec): min=5528, max=79462, avg=23332.81, stdev=13995.10 00:14:06.307 clat percentiles (usec): 00:14:06.307 | 1.00th=[ 6456], 5.00th=[ 7635], 10.00th=[ 9241], 20.00th=[11469], 00:14:06.307 | 30.00th=[13829], 40.00th=[17695], 50.00th=[20317], 60.00th=[23462], 00:14:06.307 | 70.00th=[27395], 80.00th=[31589], 90.00th=[41157], 95.00th=[50594], 00:14:06.307 | 99.00th=[72877], 99.50th=[72877], 99.90th=[79168], 99.95th=[79168], 00:14:06.307 | 99.99th=[79168] 00:14:06.307 write: IOPS=70, BW=9058KiB/s (9276kB/s)(76.5MiB/8648msec); 0 zone resets 00:14:06.307 slat (usec): min=39, max=2367, avg=136.30, stdev=190.11 00:14:06.307 clat (msec): min=64, max=486, avg=111.91, stdev=53.27 00:14:06.307 lat (msec): min=64, max=486, avg=112.04, stdev=53.28 00:14:06.307 clat percentiles (msec): 00:14:06.307 | 1.00th=[ 69], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 78], 00:14:06.307 | 30.00th=[ 84], 40.00th=[ 89], 50.00th=[ 96], 60.00th=[ 104], 00:14:06.307 | 70.00th=[ 116], 80.00th=[ 134], 
90.00th=[ 178], 95.00th=[ 209], 00:14:06.307 | 99.00th=[ 372], 99.50th=[ 464], 99.90th=[ 485], 99.95th=[ 485], 00:14:06.307 | 99.99th=[ 485] 00:14:06.307 bw ( KiB/s): min= 1792, max=12263, per=0.83%, avg=8138.42, stdev=3464.11, samples=19 00:14:06.307 iops : min= 14, max= 95, avg=63.32, stdev=26.97, samples=19 00:14:06.307 lat (msec) : 10=5.49%, 20=16.12%, 50=19.87%, 100=33.15%, 250=24.27% 00:14:06.307 lat (msec) : 500=1.10% 00:14:06.307 cpu : usr=0.40%, sys=0.25%, ctx=1820, majf=0, minf=7 00:14:06.307 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.307 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.307 issued rwts: total=480,612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.307 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.307 job35: (groupid=0, jobs=1): err= 0: pid=71552: Mon Jul 22 17:18:25 2024 00:14:06.307 read: IOPS=62, BW=7952KiB/s (8143kB/s)(60.0MiB/7726msec) 00:14:06.307 slat (usec): min=6, max=1019, avg=61.76, stdev=108.97 00:14:06.307 clat (usec): min=8537, max=70196, avg=22069.27, stdev=13813.20 00:14:06.307 lat (usec): min=8639, max=70207, avg=22131.03, stdev=13817.05 00:14:06.307 clat percentiles (usec): 00:14:06.307 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[11863], 00:14:06.307 | 30.00th=[13304], 40.00th=[14615], 50.00th=[15795], 60.00th=[18744], 00:14:06.307 | 70.00th=[24249], 80.00th=[32637], 90.00th=[43254], 95.00th=[53740], 00:14:06.307 | 99.00th=[64750], 99.50th=[64750], 99.90th=[69731], 99.95th=[69731], 00:14:06.307 | 99.99th=[69731] 00:14:06.307 write: IOPS=66, BW=8508KiB/s (8712kB/s)(72.2MiB/8696msec); 0 zone resets 00:14:06.307 slat (usec): min=35, max=9335, avg=142.32, stdev=419.50 00:14:06.307 clat (msec): min=68, max=486, avg=119.16, stdev=64.41 00:14:06.307 lat (msec): min=68, max=487, avg=119.30, stdev=64.41 00:14:06.307 clat percentiles (msec): 
00:14:06.307 | 1.00th=[ 71], 5.00th=[ 72], 10.00th=[ 75], 20.00th=[ 79], 00:14:06.307 | 30.00th=[ 84], 40.00th=[ 89], 50.00th=[ 95], 60.00th=[ 102], 00:14:06.307 | 70.00th=[ 121], 80.00th=[ 144], 90.00th=[ 199], 95.00th=[ 257], 00:14:06.307 | 99.00th=[ 368], 99.50th=[ 472], 99.90th=[ 489], 99.95th=[ 489], 00:14:06.307 | 99.99th=[ 489] 00:14:06.307 bw ( KiB/s): min= 1792, max=12800, per=0.78%, avg=7682.42, stdev=3590.10, samples=19 00:14:06.307 iops : min= 14, max= 100, avg=59.79, stdev=27.96, samples=19 00:14:06.307 lat (msec) : 10=4.73%, 20=23.44%, 50=13.99%, 100=35.07%, 250=19.66% 00:14:06.307 lat (msec) : 500=3.12% 00:14:06.307 cpu : usr=0.44%, sys=0.13%, ctx=1852, majf=0, minf=11 00:14:06.307 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.307 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.307 issued rwts: total=480,578,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.307 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.307 job36: (groupid=0, jobs=1): err= 0: pid=71553: Mon Jul 22 17:18:25 2024 00:14:06.307 read: IOPS=74, BW=9565KiB/s (9795kB/s)(77.6MiB/8310msec) 00:14:06.307 slat (usec): min=6, max=3445, avg=90.95, stdev=222.91 00:14:06.307 clat (usec): min=8939, max=73445, avg=20950.75, stdev=9825.68 00:14:06.307 lat (usec): min=9093, max=73605, avg=21041.71, stdev=9811.54 00:14:06.307 clat percentiles (usec): 00:14:06.307 | 1.00th=[ 9634], 5.00th=[10028], 10.00th=[10552], 20.00th=[13304], 00:14:06.307 | 30.00th=[14222], 40.00th=[16188], 50.00th=[20055], 60.00th=[22414], 00:14:06.307 | 70.00th=[23987], 80.00th=[25560], 90.00th=[31065], 95.00th=[41681], 00:14:06.307 | 99.00th=[54789], 99.50th=[62653], 99.90th=[73925], 99.95th=[73925], 00:14:06.307 | 99.99th=[73925] 00:14:06.307 write: IOPS=76, BW=9786KiB/s (10.0MB/s)(80.0MiB/8371msec); 0 zone resets 00:14:06.307 slat (usec): min=31, max=28310, 
avg=190.08, stdev=1137.22 00:14:06.307 clat (msec): min=28, max=375, avg=103.51, stdev=48.26 00:14:06.307 lat (msec): min=29, max=375, avg=103.70, stdev=48.20 00:14:06.307 clat percentiles (msec): 00:14:06.307 | 1.00th=[ 35], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 73], 00:14:06.307 | 30.00th=[ 78], 40.00th=[ 81], 50.00th=[ 86], 60.00th=[ 92], 00:14:06.307 | 70.00th=[ 102], 80.00th=[ 116], 90.00th=[ 178], 95.00th=[ 213], 00:14:06.307 | 99.00th=[ 255], 99.50th=[ 368], 99.90th=[ 376], 99.95th=[ 376], 00:14:06.307 | 99.99th=[ 376] 00:14:06.307 bw ( KiB/s): min= 1532, max=13824, per=0.88%, avg=8617.11, stdev=4091.08, samples=19 00:14:06.307 iops : min= 11, max= 108, avg=67.16, stdev=32.06, samples=19 00:14:06.307 lat (msec) : 10=2.46%, 20=21.89%, 50=24.58%, 100=35.61%, 250=14.83% 00:14:06.307 lat (msec) : 500=0.63% 00:14:06.307 cpu : usr=0.43%, sys=0.27%, ctx=2118, majf=0, minf=3 00:14:06.307 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.307 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.307 issued rwts: total=621,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.307 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.308 job37: (groupid=0, jobs=1): err= 0: pid=71554: Mon Jul 22 17:18:25 2024 00:14:06.308 read: IOPS=60, BW=7711KiB/s (7896kB/s)(61.0MiB/8101msec) 00:14:06.308 slat (usec): min=6, max=3091, avg=69.42, stdev=181.97 00:14:06.308 clat (msec): min=11, max=150, avg=28.15, stdev=19.77 00:14:06.308 lat (msec): min=11, max=150, avg=28.22, stdev=19.77 00:14:06.308 clat percentiles (msec): 00:14:06.308 | 1.00th=[ 12], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 18], 00:14:06.308 | 30.00th=[ 20], 40.00th=[ 22], 50.00th=[ 24], 60.00th=[ 25], 00:14:06.308 | 70.00th=[ 28], 80.00th=[ 31], 90.00th=[ 44], 95.00th=[ 74], 00:14:06.308 | 99.00th=[ 121], 99.50th=[ 146], 99.90th=[ 150], 99.95th=[ 150], 00:14:06.308 | 
99.99th=[ 150] 00:14:06.308 write: IOPS=77, BW=9894KiB/s (10.1MB/s)(80.0MiB/8280msec); 0 zone resets 00:14:06.308 slat (usec): min=39, max=18855, avg=172.82, stdev=766.59 00:14:06.308 clat (msec): min=47, max=474, avg=102.06, stdev=53.22 00:14:06.308 lat (msec): min=49, max=474, avg=102.23, stdev=53.20 00:14:06.308 clat percentiles (msec): 00:14:06.308 | 1.00th=[ 54], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 73], 00:14:06.308 | 30.00th=[ 75], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 90], 00:14:06.308 | 70.00th=[ 99], 80.00th=[ 112], 90.00th=[ 138], 95.00th=[ 215], 00:14:06.308 | 99.00th=[ 376], 99.50th=[ 388], 99.90th=[ 477], 99.95th=[ 477], 00:14:06.308 | 99.99th=[ 477] 00:14:06.308 bw ( KiB/s): min= 512, max=13312, per=0.88%, avg=8686.83, stdev=4302.52, samples=18 00:14:06.308 iops : min= 4, max= 104, avg=67.83, stdev=33.58, samples=18 00:14:06.308 lat (msec) : 20=14.10%, 50=25.71%, 100=43.79%, 250=14.10%, 500=2.30% 00:14:06.308 cpu : usr=0.40%, sys=0.24%, ctx=1967, majf=0, minf=1 00:14:06.308 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.308 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.308 issued rwts: total=488,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.308 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.308 job38: (groupid=0, jobs=1): err= 0: pid=71555: Mon Jul 22 17:18:25 2024 00:14:06.308 read: IOPS=58, BW=7457KiB/s (7636kB/s)(60.0MiB/8239msec) 00:14:06.308 slat (usec): min=7, max=1078, avg=56.22, stdev=104.61 00:14:06.308 clat (msec): min=9, max=355, avg=27.31, stdev=41.40 00:14:06.308 lat (msec): min=9, max=355, avg=27.36, stdev=41.40 00:14:06.308 clat percentiles (msec): 00:14:06.308 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 13], 00:14:06.308 | 30.00th=[ 15], 40.00th=[ 18], 50.00th=[ 21], 60.00th=[ 23], 00:14:06.308 | 70.00th=[ 25], 80.00th=[ 27], 90.00th=[ 43], 
95.00th=[ 51], 00:14:06.308 | 99.00th=[ 330], 99.50th=[ 342], 99.90th=[ 355], 99.95th=[ 355], 00:14:06.308 | 99.99th=[ 355] 00:14:06.308 write: IOPS=73, BW=9450KiB/s (9677kB/s)(77.5MiB/8398msec); 0 zone resets 00:14:06.308 slat (usec): min=31, max=2380, avg=131.55, stdev=208.81 00:14:06.308 clat (msec): min=56, max=407, avg=107.32, stdev=50.82 00:14:06.308 lat (msec): min=56, max=407, avg=107.45, stdev=50.82 00:14:06.308 clat percentiles (msec): 00:14:06.308 | 1.00th=[ 63], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 77], 00:14:06.308 | 30.00th=[ 82], 40.00th=[ 86], 50.00th=[ 90], 60.00th=[ 99], 00:14:06.308 | 70.00th=[ 106], 80.00th=[ 125], 90.00th=[ 157], 95.00th=[ 222], 00:14:06.308 | 99.00th=[ 317], 99.50th=[ 397], 99.90th=[ 409], 99.95th=[ 409], 00:14:06.308 | 99.99th=[ 409] 00:14:06.308 bw ( KiB/s): min= 1792, max=13312, per=0.84%, avg=8242.89, stdev=4050.99, samples=19 00:14:06.308 iops : min= 14, max= 104, avg=64.05, stdev=31.59, samples=19 00:14:06.308 lat (msec) : 10=0.18%, 20=21.09%, 50=20.18%, 100=36.73%, 250=19.36% 00:14:06.308 lat (msec) : 500=2.45% 00:14:06.308 cpu : usr=0.37%, sys=0.26%, ctx=1834, majf=0, minf=5 00:14:06.308 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.308 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.308 issued rwts: total=480,620,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.308 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.308 job39: (groupid=0, jobs=1): err= 0: pid=71556: Mon Jul 22 17:18:25 2024 00:14:06.308 read: IOPS=73, BW=9409KiB/s (9634kB/s)(80.0MiB/8707msec) 00:14:06.308 slat (usec): min=6, max=1085, avg=46.14, stdev=85.71 00:14:06.308 clat (usec): min=4871, max=87992, avg=12035.21, stdev=7896.59 00:14:06.308 lat (usec): min=5218, max=88013, avg=12081.35, stdev=7898.02 00:14:06.308 clat percentiles (usec): 00:14:06.308 | 1.00th=[ 5735], 5.00th=[ 6194], 
10.00th=[ 6718], 20.00th=[ 8160], 00:14:06.308 | 30.00th=[ 9503], 40.00th=[10290], 50.00th=[10814], 60.00th=[11338], 00:14:06.308 | 70.00th=[12518], 80.00th=[13566], 90.00th=[16450], 95.00th=[20317], 00:14:06.308 | 99.00th=[47973], 99.50th=[82314], 99.90th=[87557], 99.95th=[87557], 00:14:06.308 | 99.99th=[87557] 00:14:06.308 write: IOPS=71, BW=9091KiB/s (9309kB/s)(81.0MiB/9124msec); 0 zone resets 00:14:06.308 slat (usec): min=32, max=3856, avg=142.35, stdev=239.26 00:14:06.308 clat (msec): min=3, max=343, avg=111.96, stdev=58.54 00:14:06.308 lat (msec): min=3, max=343, avg=112.10, stdev=58.56 00:14:06.308 clat percentiles (msec): 00:14:06.308 | 1.00th=[ 8], 5.00th=[ 49], 10.00th=[ 72], 20.00th=[ 75], 00:14:06.308 | 30.00th=[ 81], 40.00th=[ 83], 50.00th=[ 89], 60.00th=[ 100], 00:14:06.308 | 70.00th=[ 121], 80.00th=[ 155], 90.00th=[ 205], 95.00th=[ 232], 00:14:06.308 | 99.00th=[ 305], 99.50th=[ 309], 99.90th=[ 342], 99.95th=[ 342], 00:14:06.308 | 99.99th=[ 342] 00:14:06.308 bw ( KiB/s): min= 2043, max=18468, per=0.83%, avg=8190.50, stdev=4246.73, samples=20 00:14:06.308 iops : min= 15, max= 144, avg=63.75, stdev=33.26, samples=20 00:14:06.308 lat (msec) : 4=0.16%, 10=18.56%, 20=30.43%, 50=2.72%, 100=28.18% 00:14:06.308 lat (msec) : 250=18.25%, 500=1.71% 00:14:06.308 cpu : usr=0.53%, sys=0.19%, ctx=2110, majf=0, minf=3 00:14:06.308 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.308 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.308 issued rwts: total=640,648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.308 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.308 job40: (groupid=0, jobs=1): err= 0: pid=71557: Mon Jul 22 17:18:25 2024 00:14:06.308 read: IOPS=75, BW=9686KiB/s (9918kB/s)(80.0MiB/8458msec) 00:14:06.308 slat (usec): min=6, max=1523, avg=68.97, stdev=132.32 00:14:06.308 clat (usec): min=5821, 
max=87941, avg=17516.95, stdev=12480.02 00:14:06.308 lat (usec): min=5834, max=87957, avg=17585.93, stdev=12478.15 00:14:06.308 clat percentiles (usec): 00:14:06.308 | 1.00th=[ 6587], 5.00th=[ 7373], 10.00th=[ 8094], 20.00th=[10028], 00:14:06.308 | 30.00th=[11338], 40.00th=[12649], 50.00th=[14353], 60.00th=[15533], 00:14:06.308 | 70.00th=[17695], 80.00th=[21890], 90.00th=[27132], 95.00th=[35914], 00:14:06.308 | 99.00th=[76022], 99.50th=[86508], 99.90th=[87557], 99.95th=[87557], 00:14:06.308 | 99.99th=[87557] 00:14:06.308 write: IOPS=76, BW=9739KiB/s (9973kB/s)(82.2MiB/8648msec); 0 zone resets 00:14:06.308 slat (usec): min=32, max=1440, avg=126.92, stdev=148.37 00:14:06.308 clat (msec): min=6, max=377, avg=104.22, stdev=52.10 00:14:06.308 lat (msec): min=6, max=377, avg=104.35, stdev=52.09 00:14:06.308 clat percentiles (msec): 00:14:06.308 | 1.00th=[ 14], 5.00th=[ 68], 10.00th=[ 70], 20.00th=[ 72], 00:14:06.308 | 30.00th=[ 75], 40.00th=[ 80], 50.00th=[ 87], 60.00th=[ 96], 00:14:06.308 | 70.00th=[ 112], 80.00th=[ 124], 90.00th=[ 161], 95.00th=[ 222], 00:14:06.308 | 99.00th=[ 317], 99.50th=[ 334], 99.90th=[ 376], 99.95th=[ 376], 00:14:06.308 | 99.99th=[ 376] 00:14:06.308 bw ( KiB/s): min= 1792, max=15872, per=0.85%, avg=8325.40, stdev=4371.15, samples=20 00:14:06.308 iops : min= 14, max= 124, avg=64.95, stdev=34.19, samples=20 00:14:06.308 lat (msec) : 10=10.32%, 20=27.04%, 50=11.40%, 100=32.13%, 250=17.64% 00:14:06.308 lat (msec) : 500=1.46% 00:14:06.308 cpu : usr=0.54%, sys=0.18%, ctx=2137, majf=0, minf=5 00:14:06.308 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.308 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.308 issued rwts: total=640,658,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.308 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.308 job41: (groupid=0, jobs=1): err= 0: pid=71558: Mon Jul 
22 17:18:25 2024 00:14:06.308 read: IOPS=76, BW=9764KiB/s (9998kB/s)(80.0MiB/8390msec) 00:14:06.308 slat (usec): min=6, max=2056, avg=67.60, stdev=168.70 00:14:06.308 clat (msec): min=7, max=112, avg=23.76, stdev=13.59 00:14:06.308 lat (msec): min=7, max=112, avg=23.83, stdev=13.58 00:14:06.308 clat percentiles (msec): 00:14:06.308 | 1.00th=[ 11], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 16], 00:14:06.308 | 30.00th=[ 17], 40.00th=[ 19], 50.00th=[ 20], 60.00th=[ 23], 00:14:06.308 | 70.00th=[ 25], 80.00th=[ 28], 90.00th=[ 36], 95.00th=[ 51], 00:14:06.308 | 99.00th=[ 85], 99.50th=[ 91], 99.90th=[ 113], 99.95th=[ 113], 00:14:06.308 | 99.99th=[ 113] 00:14:06.308 write: IOPS=79, BW=9.94MiB/s (10.4MB/s)(81.0MiB/8148msec); 0 zone resets 00:14:06.308 slat (usec): min=30, max=1894, avg=135.36, stdev=186.44 00:14:06.308 clat (msec): min=31, max=335, avg=99.72, stdev=35.76 00:14:06.308 lat (msec): min=31, max=335, avg=99.85, stdev=35.77 00:14:06.308 clat percentiles (msec): 00:14:06.308 | 1.00th=[ 38], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 72], 00:14:06.308 | 30.00th=[ 78], 40.00th=[ 83], 50.00th=[ 90], 60.00th=[ 101], 00:14:06.308 | 70.00th=[ 111], 80.00th=[ 122], 90.00th=[ 138], 95.00th=[ 169], 00:14:06.308 | 99.00th=[ 230], 99.50th=[ 243], 99.90th=[ 334], 99.95th=[ 334], 00:14:06.308 | 99.99th=[ 334] 00:14:06.308 bw ( KiB/s): min= 512, max=13824, per=0.83%, avg=8187.70, stdev=4439.53, samples=20 00:14:06.308 iops : min= 4, max= 108, avg=63.75, stdev=34.83, samples=20 00:14:06.308 lat (msec) : 10=0.47%, 20=24.61%, 50=22.67%, 100=31.75%, 250=20.34% 00:14:06.308 lat (msec) : 500=0.16% 00:14:06.308 cpu : usr=0.42%, sys=0.30%, ctx=2184, majf=0, minf=5 00:14:06.308 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.308 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.308 issued rwts: total=640,648,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:14:06.308 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.308 job42: (groupid=0, jobs=1): err= 0: pid=71559: Mon Jul 22 17:18:25 2024 00:14:06.308 read: IOPS=59, BW=7582KiB/s (7764kB/s)(60.0MiB/8103msec) 00:14:06.309 slat (usec): min=7, max=1188, avg=60.20, stdev=126.04 00:14:06.309 clat (usec): min=9288, max=88541, avg=21199.74, stdev=10056.53 00:14:06.309 lat (usec): min=9316, max=88561, avg=21259.94, stdev=10048.85 00:14:06.309 clat percentiles (usec): 00:14:06.309 | 1.00th=[ 9503], 5.00th=[10552], 10.00th=[12518], 20.00th=[14353], 00:14:06.309 | 30.00th=[15401], 40.00th=[16909], 50.00th=[18744], 60.00th=[20055], 00:14:06.309 | 70.00th=[23200], 80.00th=[27395], 90.00th=[32375], 95.00th=[37487], 00:14:06.309 | 99.00th=[63701], 99.50th=[77071], 99.90th=[88605], 99.95th=[88605], 00:14:06.309 | 99.99th=[88605] 00:14:06.309 write: IOPS=72, BW=9301KiB/s (9524kB/s)(79.9MiB/8794msec); 0 zone resets 00:14:06.309 slat (usec): min=38, max=6773, avg=147.70, stdev=383.51 00:14:06.309 clat (msec): min=2, max=398, avg=108.89, stdev=50.16 00:14:06.309 lat (msec): min=2, max=398, avg=109.04, stdev=50.17 00:14:06.309 clat percentiles (msec): 00:14:06.309 | 1.00th=[ 20], 5.00th=[ 68], 10.00th=[ 70], 20.00th=[ 74], 00:14:06.309 | 30.00th=[ 82], 40.00th=[ 89], 50.00th=[ 96], 60.00th=[ 105], 00:14:06.309 | 70.00th=[ 118], 80.00th=[ 131], 90.00th=[ 165], 95.00th=[ 207], 00:14:06.309 | 99.00th=[ 305], 99.50th=[ 317], 99.90th=[ 397], 99.95th=[ 397], 00:14:06.309 | 99.99th=[ 397] 00:14:06.309 bw ( KiB/s): min= 1792, max=15134, per=0.87%, avg=8501.63, stdev=3718.08, samples=19 00:14:06.309 iops : min= 14, max= 118, avg=66.32, stdev=29.03, samples=19 00:14:06.309 lat (msec) : 4=0.09%, 10=0.89%, 20=25.38%, 50=16.80%, 100=30.92% 00:14:06.309 lat (msec) : 250=24.40%, 500=1.52% 00:14:06.309 cpu : usr=0.42%, sys=0.26%, ctx=1805, majf=0, minf=5 00:14:06.309 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.309 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.309 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.309 issued rwts: total=480,639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.309 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.309 job43: (groupid=0, jobs=1): err= 0: pid=71560: Mon Jul 22 17:18:25 2024 00:14:06.309 read: IOPS=63, BW=8139KiB/s (8334kB/s)(60.0MiB/7549msec) 00:14:06.309 slat (usec): min=7, max=958, avg=52.97, stdev=104.66 00:14:06.309 clat (usec): min=5187, max=52999, avg=11998.20, stdev=6593.13 00:14:06.309 lat (usec): min=5207, max=53013, avg=12051.18, stdev=6596.72 00:14:06.309 clat percentiles (usec): 00:14:06.309 | 1.00th=[ 5276], 5.00th=[ 5538], 10.00th=[ 6259], 20.00th=[ 8225], 00:14:06.309 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[11469], 00:14:06.309 | 70.00th=[12649], 80.00th=[15139], 90.00th=[16909], 95.00th=[20055], 00:14:06.309 | 99.00th=[45351], 99.50th=[47973], 99.90th=[53216], 99.95th=[53216], 00:14:06.309 | 99.99th=[53216] 00:14:06.309 write: IOPS=65, BW=8384KiB/s (8585kB/s)(76.4MiB/9328msec); 0 zone resets 00:14:06.309 slat (usec): min=38, max=4553, avg=167.42, stdev=311.61 00:14:06.309 clat (msec): min=65, max=307, avg=121.40, stdev=40.60 00:14:06.309 lat (msec): min=65, max=307, avg=121.56, stdev=40.60 00:14:06.309 clat percentiles (msec): 00:14:06.309 | 1.00th=[ 69], 5.00th=[ 74], 10.00th=[ 79], 20.00th=[ 88], 00:14:06.309 | 30.00th=[ 96], 40.00th=[ 103], 50.00th=[ 113], 60.00th=[ 123], 00:14:06.309 | 70.00th=[ 136], 80.00th=[ 150], 90.00th=[ 169], 95.00th=[ 203], 00:14:06.309 | 99.00th=[ 257], 99.50th=[ 292], 99.90th=[ 309], 99.95th=[ 309], 00:14:06.309 | 99.99th=[ 309] 00:14:06.309 bw ( KiB/s): min= 4096, max=11497, per=0.79%, avg=7714.65, stdev=2420.14, samples=20 00:14:06.309 iops : min= 32, max= 89, avg=60.05, stdev=18.84, samples=20 00:14:06.309 lat (msec) : 10=20.81%, 20=20.90%, 50=2.20%, 100=21.08%, 250=34.28% 00:14:06.309 lat (msec) : 500=0.73% 
00:14:06.309 cpu : usr=0.44%, sys=0.24%, ctx=1789, majf=0, minf=5 00:14:06.309 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.309 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.309 issued rwts: total=480,611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.309 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.309 job44: (groupid=0, jobs=1): err= 0: pid=71561: Mon Jul 22 17:18:25 2024 00:14:06.309 read: IOPS=78, BW=9.81MiB/s (10.3MB/s)(80.0MiB/8155msec) 00:14:06.309 slat (usec): min=6, max=3465, avg=69.15, stdev=182.26 00:14:06.309 clat (usec): min=7568, max=84154, avg=21829.04, stdev=10609.60 00:14:06.309 lat (usec): min=7583, max=84164, avg=21898.19, stdev=10602.69 00:14:06.309 clat percentiles (usec): 00:14:06.309 | 1.00th=[10552], 5.00th=[11731], 10.00th=[12256], 20.00th=[13435], 00:14:06.309 | 30.00th=[15139], 40.00th=[16712], 50.00th=[18744], 60.00th=[21103], 00:14:06.309 | 70.00th=[24773], 80.00th=[28181], 90.00th=[34341], 95.00th=[38536], 00:14:06.309 | 99.00th=[67634], 99.50th=[78119], 99.90th=[84411], 99.95th=[84411], 00:14:06.309 | 99.99th=[84411] 00:14:06.309 write: IOPS=78, BW=9.86MiB/s (10.3MB/s)(81.9MiB/8302msec); 0 zone resets 00:14:06.309 slat (usec): min=38, max=5161, avg=136.15, stdev=256.87 00:14:06.309 clat (msec): min=48, max=323, avg=100.32, stdev=41.83 00:14:06.309 lat (msec): min=49, max=323, avg=100.46, stdev=41.84 00:14:06.309 clat percentiles (msec): 00:14:06.309 | 1.00th=[ 55], 5.00th=[ 68], 10.00th=[ 70], 20.00th=[ 72], 00:14:06.309 | 30.00th=[ 75], 40.00th=[ 81], 50.00th=[ 87], 60.00th=[ 94], 00:14:06.309 | 70.00th=[ 104], 80.00th=[ 122], 90.00th=[ 148], 95.00th=[ 186], 00:14:06.309 | 99.00th=[ 268], 99.50th=[ 305], 99.90th=[ 326], 99.95th=[ 326], 00:14:06.309 | 99.99th=[ 326] 00:14:06.309 bw ( KiB/s): min= 1536, max=13312, per=0.94%, avg=9195.56, stdev=3903.69, samples=18 
00:14:06.309 iops : min= 12, max= 104, avg=71.61, stdev=30.57, samples=18 00:14:06.309 lat (msec) : 10=0.46%, 20=26.41%, 50=21.54%, 100=35.06%, 250=15.60% 00:14:06.309 lat (msec) : 500=0.93% 00:14:06.309 cpu : usr=0.39%, sys=0.33%, ctx=2198, majf=0, minf=1 00:14:06.309 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.309 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.309 issued rwts: total=640,655,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.309 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.309 job45: (groupid=0, jobs=1): err= 0: pid=71562: Mon Jul 22 17:18:25 2024 00:14:06.309 read: IOPS=66, BW=8507KiB/s (8711kB/s)(67.9MiB/8170msec) 00:14:06.309 slat (usec): min=6, max=1195, avg=61.26, stdev=120.66 00:14:06.309 clat (usec): min=10170, max=82203, avg=24979.65, stdev=12270.14 00:14:06.309 lat (usec): min=10290, max=82209, avg=25040.90, stdev=12265.95 00:14:06.309 clat percentiles (usec): 00:14:06.309 | 1.00th=[10945], 5.00th=[12518], 10.00th=[12911], 20.00th=[14877], 00:14:06.309 | 30.00th=[17433], 40.00th=[19792], 50.00th=[22676], 60.00th=[25035], 00:14:06.309 | 70.00th=[27132], 80.00th=[30016], 90.00th=[42206], 95.00th=[51643], 00:14:06.309 | 99.00th=[70779], 99.50th=[79168], 99.90th=[82314], 99.95th=[82314], 00:14:06.309 | 99.99th=[82314] 00:14:06.309 write: IOPS=77, BW=9868KiB/s (10.1MB/s)(80.0MiB/8302msec); 0 zone resets 00:14:06.309 slat (usec): min=30, max=1714, avg=129.01, stdev=176.79 00:14:06.309 clat (msec): min=63, max=320, avg=102.55, stdev=40.38 00:14:06.309 lat (msec): min=63, max=320, avg=102.68, stdev=40.40 00:14:06.309 clat percentiles (msec): 00:14:06.309 | 1.00th=[ 67], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 72], 00:14:06.309 | 30.00th=[ 77], 40.00th=[ 82], 50.00th=[ 92], 60.00th=[ 103], 00:14:06.309 | 70.00th=[ 113], 80.00th=[ 124], 90.00th=[ 144], 95.00th=[ 169], 00:14:06.309 
| 99.00th=[ 271], 99.50th=[ 296], 99.90th=[ 321], 99.95th=[ 321], 00:14:06.309 | 99.99th=[ 321] 00:14:06.309 bw ( KiB/s): min= 1539, max=13568, per=0.86%, avg=8432.84, stdev=4022.42, samples=19 00:14:06.309 iops : min= 12, max= 106, avg=65.74, stdev=31.55, samples=19 00:14:06.309 lat (msec) : 20=18.68%, 50=24.60%, 100=33.98%, 250=21.64%, 500=1.10% 00:14:06.309 cpu : usr=0.42%, sys=0.25%, ctx=1908, majf=0, minf=3 00:14:06.309 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.309 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.309 issued rwts: total=543,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.309 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.309 job46: (groupid=0, jobs=1): err= 0: pid=71567: Mon Jul 22 17:18:25 2024 00:14:06.309 read: IOPS=58, BW=7434KiB/s (7612kB/s)(60.0MiB/8265msec) 00:14:06.309 slat (usec): min=5, max=656, avg=51.33, stdev=85.52 00:14:06.309 clat (usec): min=6762, max=71819, avg=19151.47, stdev=11218.01 00:14:06.309 lat (usec): min=6794, max=71830, avg=19202.80, stdev=11212.73 00:14:06.309 clat percentiles (usec): 00:14:06.309 | 1.00th=[ 7046], 5.00th=[ 7570], 10.00th=[ 8225], 20.00th=[ 9110], 00:14:06.309 | 30.00th=[11338], 40.00th=[14222], 50.00th=[16319], 60.00th=[19792], 00:14:06.309 | 70.00th=[22676], 80.00th=[26084], 90.00th=[33817], 95.00th=[38536], 00:14:06.309 | 99.00th=[66847], 99.50th=[66847], 99.90th=[71828], 99.95th=[71828], 00:14:06.310 | 99.99th=[71828] 00:14:06.310 write: IOPS=71, BW=9144KiB/s (9363kB/s)(79.5MiB/8903msec); 0 zone resets 00:14:06.310 slat (usec): min=38, max=2055, avg=137.14, stdev=207.43 00:14:06.310 clat (msec): min=55, max=294, avg=111.13, stdev=44.83 00:14:06.310 lat (msec): min=55, max=294, avg=111.27, stdev=44.84 00:14:06.310 clat percentiles (msec): 00:14:06.310 | 1.00th=[ 62], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 73], 00:14:06.310 
| 30.00th=[ 81], 40.00th=[ 88], 50.00th=[ 99], 60.00th=[ 110], 00:14:06.310 | 70.00th=[ 124], 80.00th=[ 142], 90.00th=[ 174], 95.00th=[ 207], 00:14:06.310 | 99.00th=[ 271], 99.50th=[ 284], 99.90th=[ 296], 99.95th=[ 296], 00:14:06.310 | 99.99th=[ 296] 00:14:06.310 bw ( KiB/s): min= 255, max=13568, per=0.82%, avg=8047.10, stdev=4005.31, samples=20 00:14:06.310 iops : min= 1, max= 106, avg=62.65, stdev=31.39, samples=20 00:14:06.310 lat (msec) : 10=10.04%, 20=16.13%, 50=15.77%, 100=30.56%, 250=26.70% 00:14:06.310 lat (msec) : 500=0.81% 00:14:06.310 cpu : usr=0.33%, sys=0.32%, ctx=1798, majf=0, minf=7 00:14:06.310 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.310 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.310 issued rwts: total=480,636,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.310 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.310 job47: (groupid=0, jobs=1): err= 0: pid=71568: Mon Jul 22 17:18:25 2024 00:14:06.310 read: IOPS=65, BW=8356KiB/s (8556kB/s)(60.0MiB/7353msec) 00:14:06.310 slat (usec): min=6, max=791, avg=52.02, stdev=95.27 00:14:06.310 clat (usec): min=5050, max=18986, avg=9687.06, stdev=3042.33 00:14:06.310 lat (usec): min=5078, max=18997, avg=9739.08, stdev=3050.85 00:14:06.310 clat percentiles (usec): 00:14:06.310 | 1.00th=[ 5342], 5.00th=[ 5538], 10.00th=[ 5997], 20.00th=[ 6652], 00:14:06.310 | 30.00th=[ 7832], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[10028], 00:14:06.310 | 70.00th=[11338], 80.00th=[12125], 90.00th=[13042], 95.00th=[15533], 00:14:06.310 | 99.00th=[18482], 99.50th=[19006], 99.90th=[19006], 99.95th=[19006], 00:14:06.310 | 99.99th=[19006] 00:14:06.310 write: IOPS=65, BW=8342KiB/s (8542kB/s)(77.1MiB/9467msec); 0 zone resets 00:14:06.310 slat (usec): min=30, max=1466, avg=148.78, stdev=203.42 00:14:06.310 clat (msec): min=66, max=281, avg=122.00, stdev=43.80 
00:14:06.310 lat (msec): min=66, max=281, avg=122.15, stdev=43.82 00:14:06.310 clat percentiles (msec): 00:14:06.310 | 1.00th=[ 67], 5.00th=[ 72], 10.00th=[ 75], 20.00th=[ 84], 00:14:06.310 | 30.00th=[ 93], 40.00th=[ 102], 50.00th=[ 112], 60.00th=[ 123], 00:14:06.310 | 70.00th=[ 140], 80.00th=[ 155], 90.00th=[ 182], 95.00th=[ 213], 00:14:06.310 | 99.00th=[ 262], 99.50th=[ 268], 99.90th=[ 279], 99.95th=[ 279], 00:14:06.310 | 99.99th=[ 279] 00:14:06.310 bw ( KiB/s): min= 3584, max=12288, per=0.79%, avg=7791.35, stdev=2438.58, samples=20 00:14:06.310 iops : min= 28, max= 96, avg=60.70, stdev=19.13, samples=20 00:14:06.310 lat (msec) : 10=25.71%, 20=18.05%, 100=21.60%, 250=33.82%, 500=0.82% 00:14:06.310 cpu : usr=0.41%, sys=0.18%, ctx=1881, majf=0, minf=3 00:14:06.310 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.310 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.310 issued rwts: total=480,617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.310 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.310 job48: (groupid=0, jobs=1): err= 0: pid=71569: Mon Jul 22 17:18:25 2024 00:14:06.310 read: IOPS=58, BW=7551KiB/s (7732kB/s)(60.0MiB/8137msec) 00:14:06.310 slat (usec): min=7, max=741, avg=65.28, stdev=112.16 00:14:06.310 clat (usec): min=4943, max=76872, avg=18686.25, stdev=12256.06 00:14:06.310 lat (usec): min=4960, max=76888, avg=18751.53, stdev=12258.40 00:14:06.310 clat percentiles (usec): 00:14:06.310 | 1.00th=[ 5473], 5.00th=[ 6194], 10.00th=[ 7635], 20.00th=[10028], 00:14:06.310 | 30.00th=[11994], 40.00th=[13042], 50.00th=[14484], 60.00th=[16909], 00:14:06.310 | 70.00th=[22938], 80.00th=[25822], 90.00th=[30016], 95.00th=[47973], 00:14:06.310 | 99.00th=[68682], 99.50th=[69731], 99.90th=[77071], 99.95th=[77071], 00:14:06.310 | 99.99th=[77071] 00:14:06.310 write: IOPS=67, BW=8613KiB/s 
(8819kB/s)(75.1MiB/8932msec); 0 zone resets 00:14:06.310 slat (usec): min=37, max=3776, avg=149.03, stdev=237.13 00:14:06.310 clat (msec): min=39, max=374, avg=117.72, stdev=56.04 00:14:06.310 lat (msec): min=39, max=374, avg=117.87, stdev=56.05 00:14:06.310 clat percentiles (msec): 00:14:06.310 | 1.00th=[ 47], 5.00th=[ 69], 10.00th=[ 70], 20.00th=[ 74], 00:14:06.310 | 30.00th=[ 81], 40.00th=[ 90], 50.00th=[ 104], 60.00th=[ 116], 00:14:06.310 | 70.00th=[ 128], 80.00th=[ 150], 90.00th=[ 184], 95.00th=[ 228], 00:14:06.310 | 99.00th=[ 334], 99.50th=[ 368], 99.90th=[ 376], 99.95th=[ 376], 00:14:06.310 | 99.99th=[ 376] 00:14:06.310 bw ( KiB/s): min= 2810, max=13312, per=0.81%, avg=7994.84, stdev=3141.91, samples=19 00:14:06.310 iops : min= 21, max= 104, avg=62.26, stdev=24.73, samples=19 00:14:06.310 lat (msec) : 10=8.60%, 20=21.28%, 50=13.32%, 100=27.75%, 250=26.92% 00:14:06.310 lat (msec) : 500=2.13% 00:14:06.310 cpu : usr=0.35%, sys=0.26%, ctx=1885, majf=0, minf=1 00:14:06.310 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.310 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.310 issued rwts: total=480,601,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.310 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.310 job49: (groupid=0, jobs=1): err= 0: pid=71571: Mon Jul 22 17:18:25 2024 00:14:06.310 read: IOPS=61, BW=7891KiB/s (8080kB/s)(60.0MiB/7786msec) 00:14:06.310 slat (usec): min=6, max=977, avg=62.70, stdev=125.24 00:14:06.310 clat (msec): min=6, max=457, avg=24.26, stdev=52.74 00:14:06.310 lat (msec): min=6, max=457, avg=24.32, stdev=52.74 00:14:06.310 clat percentiles (msec): 00:14:06.310 | 1.00th=[ 8], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:14:06.310 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 13], 60.00th=[ 14], 00:14:06.310 | 70.00th=[ 17], 80.00th=[ 22], 90.00th=[ 34], 95.00th=[ 75], 00:14:06.310 
| 99.00th=[ 447], 99.50th=[ 451], 99.90th=[ 456], 99.95th=[ 456],
00:14:06.310 | 99.99th=[ 456]
00:14:06.310 write: IOPS=62, BW=8061KiB/s (8254kB/s)(67.6MiB/8591msec); 0 zone resets
00:14:06.310 slat (usec): min=37, max=1543, avg=127.36, stdev=152.41
00:14:06.310 clat (msec): min=22, max=328, avg=126.17, stdev=51.33
00:14:06.310 lat (msec): min=22, max=328, avg=126.29, stdev=51.35
00:14:06.310 clat percentiles (msec):
00:14:06.310 | 1.00th=[ 28], 5.00th=[ 69], 10.00th=[ 71], 20.00th=[ 80],
00:14:06.310 | 30.00th=[ 91], 40.00th=[ 104], 50.00th=[ 120], 60.00th=[ 132],
00:14:06.310 | 70.00th=[ 150], 80.00th=[ 163], 90.00th=[ 192], 95.00th=[ 222],
00:14:06.310 | 99.00th=[ 300], 99.50th=[ 321], 99.90th=[ 330], 99.95th=[ 330],
00:14:06.310 | 99.99th=[ 330]
00:14:06.310 bw ( KiB/s): min= 1532, max=12288, per=0.73%, avg=7185.05, stdev=3109.43, samples=19
00:14:06.310 iops : min= 11, max= 96, avg=55.79, stdev=24.55, samples=19
00:14:06.310 lat (msec) : 10=18.81%, 20=17.53%, 50=8.13%, 100=21.45%, 250=32.03%
00:14:06.310 lat (msec) : 500=2.06%
00:14:06.310 cpu : usr=0.35%, sys=0.24%, ctx=1689, majf=0, minf=4
00:14:06.310 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.310 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.310 issued rwts: total=480,541,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.310 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.310 job50: (groupid=0, jobs=1): err= 0: pid=71572: Mon Jul 22 17:18:25 2024
00:14:06.310 read: IOPS=90, BW=11.4MiB/s (11.9MB/s)(100MiB/8801msec)
00:14:06.310 slat (usec): min=6, max=1178, avg=55.27, stdev=118.78
00:14:06.310 clat (msec): min=3, max=109, avg=15.71, stdev=12.35
00:14:06.310 lat (msec): min=4, max=110, avg=15.76, stdev=12.37
00:14:06.310 clat percentiles (msec):
00:14:06.310 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10],
00:14:06.310 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 14],
00:14:06.310 | 70.00th=[ 16], 80.00th=[ 20], 90.00th=[ 25], 95.00th=[ 34],
00:14:06.310 | 99.00th=[ 87], 99.50th=[ 104], 99.90th=[ 110], 99.95th=[ 110],
00:14:06.310 | 99.99th=[ 110]
00:14:06.310 write: IOPS=112, BW=14.0MiB/s (14.7MB/s)(119MiB/8473msec); 0 zone resets
00:14:06.310 slat (usec): min=32, max=28279, avg=156.66, stdev=926.81
00:14:06.310 clat (msec): min=13, max=229, avg=70.67, stdev=27.03
00:14:06.310 lat (msec): min=14, max=229, avg=70.83, stdev=26.98
00:14:06.310 clat percentiles (msec):
00:14:06.310 | 1.00th=[ 29], 5.00th=[ 50], 10.00th=[ 50], 20.00th=[ 52],
00:14:06.310 | 30.00th=[ 55], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 68],
00:14:06.310 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 102], 95.00th=[ 127],
00:14:06.310 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 230], 99.95th=[ 230],
00:14:06.310 | 99.99th=[ 230]
00:14:06.310 bw ( KiB/s): min= 2048, max=18981, per=1.23%, avg=12058.40, stdev=5502.07, samples=20
00:14:06.310 iops : min= 16, max= 148, avg=94.05, stdev=43.11, samples=20
00:14:06.310 lat (msec) : 4=0.06%, 10=12.46%, 20=25.26%, 50=14.00%, 100=42.23%
00:14:06.310 lat (msec) : 250=6.00%
00:14:06.310 cpu : usr=0.67%, sys=0.41%, ctx=2755, majf=0, minf=1
00:14:06.310 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.310 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.310 issued rwts: total=800,950,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.310 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.310 job51: (groupid=0, jobs=1): err= 0: pid=71573: Mon Jul 22 17:18:25 2024
00:14:06.310 read: IOPS=92, BW=11.6MiB/s (12.1MB/s)(100MiB/8642msec)
00:14:06.310 slat (usec): min=6, max=1117, avg=56.07, stdev=117.67
00:14:06.310 clat (msec): min=3, max=100, avg=13.58, stdev=11.96
00:14:06.310 lat (msec): min=3, max=100, avg=13.63, stdev=11.95
00:14:06.310 clat percentiles (msec):
00:14:06.310 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 8],
00:14:06.310 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 12], 60.00th=[ 13],
00:14:06.310 | 70.00th=[ 14], 80.00th=[ 17], 90.00th=[ 21], 95.00th=[ 25],
00:14:06.310 | 99.00th=[ 88], 99.50th=[ 92], 99.90th=[ 101], 99.95th=[ 101],
00:14:06.310 | 99.99th=[ 101]
00:14:06.310 write: IOPS=96, BW=12.1MiB/s (12.7MB/s)(105MiB/8667msec); 0 zone resets
00:14:06.310 slat (usec): min=36, max=2034, avg=128.36, stdev=173.72
00:14:06.311 clat (msec): min=43, max=255, avg=81.91, stdev=34.46
00:14:06.311 lat (msec): min=43, max=255, avg=82.04, stdev=34.47
00:14:06.311 clat percentiles (msec):
00:14:06.311 | 1.00th=[ 47], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 54],
00:14:06.311 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 81],
00:14:06.311 | 70.00th=[ 93], 80.00th=[ 110], 90.00th=[ 131], 95.00th=[ 144],
00:14:06.311 | 99.00th=[ 205], 99.50th=[ 211], 99.90th=[ 255], 99.95th=[ 255],
00:14:06.311 | 99.99th=[ 255]
00:14:06.311 bw ( KiB/s): min= 4608, max=17186, per=1.08%, avg=10638.85, stdev=4257.25, samples=20
00:14:06.311 iops : min= 36, max= 134, avg=82.90, stdev=33.19, samples=20
00:14:06.311 lat (msec) : 4=0.06%, 10=20.56%, 20=22.94%, 50=8.79%, 100=34.29%
00:14:06.311 lat (msec) : 250=13.30%, 500=0.06%
00:14:06.311 cpu : usr=0.54%, sys=0.39%, ctx=2687, majf=0, minf=3
00:14:06.311 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.311 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.311 issued rwts: total=800,839,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.311 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.311 job52: (groupid=0, jobs=1): err= 0: pid=71574: Mon Jul 22 17:18:25 2024
00:14:06.311 read: IOPS=93, BW=11.7MiB/s (12.3MB/s)(100MiB/8552msec)
00:14:06.311 slat (usec): min=6, max=1660, avg=56.53, stdev=117.41
00:14:06.311 clat (usec): min=4605, max=38571, avg=11799.19, stdev=5712.27
00:14:06.311 lat (usec): min=4618, max=38585, avg=11855.71, stdev=5712.85
00:14:06.311 clat percentiles (usec):
00:14:06.311 | 1.00th=[ 5080], 5.00th=[ 5800], 10.00th=[ 6128], 20.00th=[ 7111],
00:14:06.311 | 30.00th=[ 8225], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[11600],
00:14:06.311 | 70.00th=[12911], 80.00th=[15270], 90.00th=[19006], 95.00th=[23200],
00:14:06.311 | 99.00th=[31065], 99.50th=[34341], 99.90th=[38536], 99.95th=[38536],
00:14:06.311 | 99.99th=[38536]
00:14:06.311 write: IOPS=105, BW=13.2MiB/s (13.8MB/s)(117MiB/8847msec); 0 zone resets
00:14:06.311 slat (usec): min=36, max=3701, avg=127.94, stdev=200.86
00:14:06.311 clat (msec): min=37, max=264, avg=75.10, stdev=32.53
00:14:06.311 lat (msec): min=37, max=264, avg=75.23, stdev=32.53
00:14:06.311 clat percentiles (msec):
00:14:06.311 | 1.00th=[ 45], 5.00th=[ 49], 10.00th=[ 51], 20.00th=[ 53],
00:14:06.311 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 65], 60.00th=[ 69],
00:14:06.311 | 70.00th=[ 79], 80.00th=[ 92], 90.00th=[ 120], 95.00th=[ 140],
00:14:06.311 | 99.00th=[ 213], 99.50th=[ 230], 99.90th=[ 266], 99.95th=[ 266],
00:14:06.311 | 99.99th=[ 266]
00:14:06.311 bw ( KiB/s): min= 1792, max=17408, per=1.21%, avg=11835.10, stdev=5192.82, samples=20
00:14:06.311 iops : min= 14, max= 136, avg=92.25, stdev=40.49, samples=20
00:14:06.311 lat (msec) : 10=20.66%, 20=21.23%, 50=10.16%, 100=38.89%, 250=8.94%
00:14:06.311 lat (msec) : 500=0.12%
00:14:06.311 cpu : usr=0.62%, sys=0.39%, ctx=2884, majf=0, minf=1
00:14:06.311 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.311 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.311 issued rwts: total=800,933,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.311 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.311 job53: (groupid=0, jobs=1): err= 0: pid=71575: Mon Jul 22 17:18:25 2024
00:14:06.311 read: IOPS=91, BW=11.4MiB/s (12.0MB/s)(100MiB/8755msec)
00:14:06.311 slat (usec): min=5, max=1339, avg=53.91, stdev=102.30
00:14:06.311 clat (msec): min=4, max=173, avg=17.35, stdev=23.88
00:14:06.311 lat (msec): min=4, max=173, avg=17.40, stdev=23.87
00:14:06.311 clat percentiles (msec):
00:14:06.311 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 8],
00:14:06.311 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13],
00:14:06.311 | 70.00th=[ 15], 80.00th=[ 19], 90.00th=[ 26], 95.00th=[ 40],
00:14:06.311 | 99.00th=[ 148], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 174],
00:14:06.311 | 99.99th=[ 174]
00:14:06.311 write: IOPS=98, BW=12.3MiB/s (12.9MB/s)(102MiB/8287msec); 0 zone resets
00:14:06.311 slat (usec): min=35, max=1283, avg=120.46, stdev=144.69
00:14:06.311 clat (msec): min=32, max=291, avg=80.75, stdev=40.10
00:14:06.311 lat (msec): min=32, max=291, avg=80.87, stdev=40.11
00:14:06.311 clat percentiles (msec):
00:14:06.311 | 1.00th=[ 47], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 53],
00:14:06.311 | 30.00th=[ 56], 40.00th=[ 60], 50.00th=[ 65], 60.00th=[ 74],
00:14:06.311 | 70.00th=[ 88], 80.00th=[ 107], 90.00th=[ 128], 95.00th=[ 150],
00:14:06.311 | 99.00th=[ 255], 99.50th=[ 262], 99.90th=[ 292], 99.95th=[ 292],
00:14:06.311 | 99.99th=[ 292]
00:14:06.311 bw ( KiB/s): min= 1750, max=17664, per=1.05%, avg=10323.25, stdev=5201.42, samples=20
00:14:06.311 iops : min= 13, max= 138, avg=80.35, stdev=40.85, samples=20
00:14:06.311 lat (msec) : 10=19.58%, 20=22.92%, 50=9.36%, 100=35.38%, 250=12.21%
00:14:06.311 lat (msec) : 500=0.56%
00:14:06.311 cpu : usr=0.48%, sys=0.41%, ctx=2638, majf=0, minf=1
00:14:06.311 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.311 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.311 issued rwts: total=800,814,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.311 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.311 job54: (groupid=0, jobs=1): err= 0: pid=71576: Mon Jul 22 17:18:25 2024
00:14:06.311 read: IOPS=79, BW=9.93MiB/s (10.4MB/s)(80.0MiB/8052msec)
00:14:06.311 slat (usec): min=6, max=767, avg=57.14, stdev=92.47
00:14:06.311 clat (msec): min=3, max=164, avg=15.66, stdev=17.60
00:14:06.311 lat (msec): min=3, max=164, avg=15.72, stdev=17.59
00:14:06.311 clat percentiles (msec):
00:14:06.311 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 9],
00:14:06.311 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13],
00:14:06.311 | 70.00th=[ 14], 80.00th=[ 17], 90.00th=[ 22], 95.00th=[ 47],
00:14:06.311 | 99.00th=[ 94], 99.50th=[ 142], 99.90th=[ 165], 99.95th=[ 165],
00:14:06.311 | 99.99th=[ 165]
00:14:06.311 write: IOPS=91, BW=11.4MiB/s (11.9MB/s)(99.8MiB/8768msec); 0 zone resets
00:14:06.311 slat (usec): min=39, max=1656, avg=126.74, stdev=162.37
00:14:06.311 clat (msec): min=47, max=229, avg=87.44, stdev=31.06
00:14:06.311 lat (msec): min=47, max=229, avg=87.56, stdev=31.06
00:14:06.311 clat percentiles (msec):
00:14:06.311 | 1.00th=[ 50], 5.00th=[ 52], 10.00th=[ 54], 20.00th=[ 62],
00:14:06.311 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 89],
00:14:06.311 | 70.00th=[ 101], 80.00th=[ 112], 90.00th=[ 127], 95.00th=[ 144],
00:14:06.311 | 99.00th=[ 203], 99.50th=[ 220], 99.90th=[ 230], 99.95th=[ 230],
00:14:06.311 | 99.99th=[ 230]
00:14:06.311 bw ( KiB/s): min= 1792, max=17152, per=1.03%, avg=10115.95, stdev=4148.02, samples=20
00:14:06.311 iops : min= 14, max= 134, avg=78.90, stdev=32.36, samples=20
00:14:06.311 lat (msec) : 4=0.35%, 10=14.74%, 20=24.41%, 50=4.73%, 100=39.08%
00:14:06.311 lat (msec) : 250=16.69%
00:14:06.311 cpu : usr=0.49%, sys=0.33%, ctx=2475, majf=0, minf=5
00:14:06.311 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.311 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.311 issued rwts: total=640,798,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.311 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.311 job55: (groupid=0, jobs=1): err= 0: pid=71577: Mon Jul 22 17:18:25 2024
00:14:06.311 read: IOPS=88, BW=11.0MiB/s (11.6MB/s)(100MiB/9069msec)
00:14:06.311 slat (usec): min=6, max=1696, avg=77.12, stdev=162.87
00:14:06.311 clat (msec): min=3, max=109, avg=14.61, stdev=13.84
00:14:06.311 lat (msec): min=3, max=109, avg=14.69, stdev=13.84
00:14:06.311 clat percentiles (msec):
00:14:06.311 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 8],
00:14:06.311 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13],
00:14:06.311 | 70.00th=[ 15], 80.00th=[ 18], 90.00th=[ 21], 95.00th=[ 32],
00:14:06.311 | 99.00th=[ 99], 99.50th=[ 105], 99.90th=[ 110], 99.95th=[ 110],
00:14:06.311 | 99.99th=[ 110]
00:14:06.311 write: IOPS=105, BW=13.1MiB/s (13.8MB/s)(113MiB/8568msec); 0 zone resets
00:14:06.311 slat (usec): min=38, max=8514, avg=153.85, stdev=381.18
00:14:06.311 clat (msec): min=4, max=273, avg=75.40, stdev=34.98
00:14:06.311 lat (msec): min=4, max=273, avg=75.55, stdev=35.00
00:14:06.311 clat percentiles (msec):
00:14:06.311 | 1.00th=[ 14], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 54],
00:14:06.311 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 70],
00:14:06.311 | 70.00th=[ 79], 80.00th=[ 92], 90.00th=[ 121], 95.00th=[ 148],
00:14:06.311 | 99.00th=[ 226], 99.50th=[ 249], 99.90th=[ 275], 99.95th=[ 275],
00:14:06.311 | 99.99th=[ 275]
00:14:06.311 bw ( KiB/s): min= 2816, max=22528, per=1.17%, avg=11441.20, stdev=5609.88, samples=20
00:14:06.311 iops : min= 22, max= 176, avg=89.25, stdev=43.93, samples=20
00:14:06.311 lat (msec) : 4=0.24%, 10=17.75%, 20=24.81%, 50=9.35%, 100=38.80%
00:14:06.311 lat (msec) : 250=8.82%, 500=0.24%
00:14:06.311 cpu : usr=0.62%, sys=0.34%, ctx=2868, majf=0, minf=5
00:14:06.311 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.311 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.311 issued rwts: total=800,901,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.311 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.311 job56: (groupid=0, jobs=1): err= 0: pid=71578: Mon Jul 22 17:18:25 2024
00:14:06.311 read: IOPS=88, BW=11.1MiB/s (11.6MB/s)(100MiB/9038msec)
00:14:06.311 slat (usec): min=7, max=968, avg=59.06, stdev=112.89
00:14:06.311 clat (msec): min=3, max=163, avg=12.82, stdev=15.77
00:14:06.311 lat (msec): min=3, max=163, avg=12.88, stdev=15.77
00:14:06.311 clat percentiles (msec):
00:14:06.311 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8],
00:14:06.311 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 11],
00:14:06.311 | 70.00th=[ 13], 80.00th=[ 14], 90.00th=[ 18], 95.00th=[ 24],
00:14:06.311 | 99.00th=[ 68], 99.50th=[ 157], 99.90th=[ 163], 99.95th=[ 163],
00:14:06.311 | 99.99th=[ 163]
00:14:06.311 write: IOPS=109, BW=13.7MiB/s (14.3MB/s)(120MiB/8746msec); 0 zone resets
00:14:06.311 slat (usec): min=36, max=9024, avg=127.75, stdev=331.98
00:14:06.311 clat (msec): min=8, max=276, avg=72.68, stdev=34.27
00:14:06.311 lat (msec): min=8, max=276, avg=72.81, stdev=34.26
00:14:06.312 clat percentiles (msec):
00:14:06.312 | 1.00th=[ 12], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 53],
00:14:06.312 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 69],
00:14:06.312 | 70.00th=[ 77], 80.00th=[ 88], 90.00th=[ 112], 95.00th=[ 133],
00:14:06.312 | 99.00th=[ 236], 99.50th=[ 257], 99.90th=[ 279], 99.95th=[ 279],
00:14:06.312 | 99.99th=[ 279]
00:14:06.312 bw ( KiB/s): min= 2052, max=21547, per=1.24%, avg=12147.65, stdev=5139.96, samples=20
00:14:06.312 iops : min= 16, max= 168, avg=94.75, stdev=40.23, samples=20
00:14:06.312 lat (msec) : 4=0.11%, 10=24.66%, 20=19.13%, 50=6.89%, 100=41.63%
00:14:06.312 lat (msec) : 250=7.29%, 500=0.28%
00:14:06.312 cpu : usr=0.67%, sys=0.34%, ctx=2886, majf=0, minf=1
00:14:06.312 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.312 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.312 issued rwts: total=800,956,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.312 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.312 job57: (groupid=0, jobs=1): err= 0: pid=71579: Mon Jul 22 17:18:25 2024
00:14:06.312 read: IOPS=90, BW=11.3MiB/s (11.8MB/s)(100MiB/8878msec)
00:14:06.312 slat (usec): min=5, max=939, avg=53.49, stdev=103.86
00:14:06.312 clat (usec): min=6632, max=39015, avg=14651.67, stdev=5331.38
00:14:06.312 lat (usec): min=6654, max=39026, avg=14705.16, stdev=5325.56
00:14:06.312 clat percentiles (usec):
00:14:06.312 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10421],
00:14:06.312 | 30.00th=[11076], 40.00th=[11863], 50.00th=[12911], 60.00th=[15008],
00:14:06.312 | 70.00th=[16057], 80.00th=[18482], 90.00th=[20055], 95.00th=[24773],
00:14:06.312 | 99.00th=[36439], 99.50th=[36963], 99.90th=[39060], 99.95th=[39060],
00:14:06.312 | 99.99th=[39060]
00:14:06.312 write: IOPS=108, BW=13.6MiB/s (14.3MB/s)(117MiB/8562msec); 0 zone resets
00:14:06.312 slat (usec): min=35, max=1867, avg=125.16, stdev=170.28
00:14:06.312 clat (msec): min=26, max=245, avg=72.66, stdev=30.36
00:14:06.312 lat (msec): min=26, max=245, avg=72.78, stdev=30.37
00:14:06.312 clat percentiles (msec):
00:14:06.312 | 1.00th=[ 47], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 52],
00:14:06.312 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 65], 60.00th=[ 70],
00:14:06.312 | 70.00th=[ 75], 80.00th=[ 88], 90.00th=[ 106], 95.00th=[ 131],
00:14:06.312 | 99.00th=[ 203], 99.50th=[ 222], 99.90th=[ 245], 99.95th=[ 245],
00:14:06.312 | 99.99th=[ 245]
00:14:06.312 bw ( KiB/s): min= 512, max=19456, per=1.20%, avg=11826.50, stdev=5679.22, samples=20
00:14:06.312 iops : min= 4, max= 152, avg=92.05, stdev=44.54, samples=20
00:14:06.312 lat (msec) : 10=5.94%, 20=35.31%, 50=11.25%, 100=40.62%, 250=6.87%
00:14:06.312 cpu : usr=0.62%, sys=0.38%, ctx=2745, majf=0, minf=5
00:14:06.312 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.312 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.312 issued rwts: total=800,933,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.312 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.312 job58: (groupid=0, jobs=1): err= 0: pid=71586: Mon Jul 22 17:18:25 2024
00:14:06.312 read: IOPS=76, BW=9774KiB/s (10.0MB/s)(80.0MiB/8381msec)
00:14:06.312 slat (usec): min=5, max=1357, avg=58.35, stdev=115.21
00:14:06.312 clat (msec): min=4, max=224, avg=19.40, stdev=32.19
00:14:06.312 lat (msec): min=4, max=224, avg=19.46, stdev=32.20
00:14:06.312 clat percentiles (msec):
00:14:06.312 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7],
00:14:06.312 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 11],
00:14:06.312 | 70.00th=[ 13], 80.00th=[ 19], 90.00th=[ 37], 95.00th=[ 62],
00:14:06.312 | 99.00th=[ 218], 99.50th=[ 220], 99.90th=[ 226], 99.95th=[ 226],
00:14:06.312 | 99.99th=[ 226]
00:14:06.312 write: IOPS=94, BW=11.8MiB/s (12.4MB/s)(100MiB/8471msec); 0 zone resets
00:14:06.312 slat (usec): min=30, max=1696, avg=129.04, stdev=168.79
00:14:06.312 clat (msec): min=44, max=238, avg=84.06, stdev=32.86
00:14:06.312 lat (msec): min=45, max=238, avg=84.18, stdev=32.87
00:14:06.312 clat percentiles (msec):
00:14:06.312 | 1.00th=[ 48], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 56],
00:14:06.312 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 74], 60.00th=[ 86],
00:14:06.312 | 70.00th=[ 96], 80.00th=[ 111], 90.00th=[ 131], 95.00th=[ 146],
00:14:06.312 | 99.00th=[ 184], 99.50th=[ 207], 99.90th=[ 239], 99.95th=[ 239],
00:14:06.312 | 99.99th=[ 239]
00:14:06.312 bw ( KiB/s): min= 766, max=17408, per=1.03%, avg=10137.05, stdev=4965.23, samples=20
00:14:06.312 iops : min= 5, max= 136, avg=79.00, stdev=38.81, samples=20
00:14:06.312 lat (msec) : 10=23.89%, 20=12.29%, 50=8.47%, 100=38.54%, 250=16.81%
00:14:06.312 cpu : usr=0.47%, sys=0.28%, ctx=2458, majf=0, minf=1
00:14:06.312 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.312 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.312 issued rwts: total=640,800,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.312 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.312 job59: (groupid=0, jobs=1): err= 0: pid=71587: Mon Jul 22 17:18:25 2024
00:14:06.312 read: IOPS=94, BW=11.8MiB/s (12.3MB/s)(100MiB/8506msec)
00:14:06.312 slat (usec): min=6, max=1550, avg=57.83, stdev=116.96
00:14:06.312 clat (usec): min=5654, max=52147, avg=13112.59, stdev=6467.60
00:14:06.312 lat (usec): min=5967, max=52169, avg=13170.42, stdev=6475.64
00:14:06.312 clat percentiles (usec):
00:14:06.312 | 1.00th=[ 6128], 5.00th=[ 7046], 10.00th=[ 7701], 20.00th=[ 8848],
00:14:06.312 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[11469], 60.00th=[12387],
00:14:06.312 | 70.00th=[13566], 80.00th=[16450], 90.00th=[19530], 95.00th=[23462],
00:14:06.312 | 99.00th=[40633], 99.50th=[45876], 99.90th=[52167], 99.95th=[52167],
00:14:06.312 | 99.99th=[52167]
00:14:06.312 write: IOPS=104, BW=13.1MiB/s (13.7MB/s)(114MiB/8719msec); 0 zone resets
00:14:06.312 slat (usec): min=36, max=1387, avg=115.52, stdev=136.33
00:14:06.312 clat (msec): min=21, max=240, avg=75.64, stdev=32.88
00:14:06.312 lat (msec): min=21, max=240, avg=75.76, stdev=32.88
00:14:06.312 clat percentiles (msec):
00:14:06.312 | 1.00th=[ 47], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 52],
00:14:06.312 | 30.00th=[ 55], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 73],
00:14:06.312 | 70.00th=[ 80], 80.00th=[ 94], 90.00th=[ 120], 95.00th=[ 148],
00:14:06.312 | 99.00th=[ 203], 99.50th=[ 218], 99.90th=[ 241], 99.95th=[ 241],
00:14:06.312 | 99.99th=[ 241]
00:14:06.312 bw ( KiB/s): min= 2560, max=19238, per=1.18%, avg=11595.50, stdev=4935.88, samples=20
00:14:06.312 iops : min= 20, max= 150, avg=90.50, stdev=38.47, samples=20
00:14:06.312 lat (msec) : 10=16.93%, 20=25.86%, 50=10.16%, 100=38.00%, 250=9.05%
00:14:06.312 cpu : usr=0.67%, sys=0.29%, ctx=2830, majf=0, minf=3
00:14:06.312 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.312 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.312 issued rwts: total=800,913,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.312 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.312 job60: (groupid=0, jobs=1): err= 0: pid=71588: Mon Jul 22 17:18:25 2024
00:14:06.312 read: IOPS=94, BW=11.8MiB/s (12.4MB/s)(100MiB/8479msec)
00:14:06.312 slat (usec): min=6, max=1091, avg=49.26, stdev=114.65
00:14:06.312 clat (msec): min=3, max=148, avg=13.22, stdev=14.82
00:14:06.312 lat (msec): min=3, max=148, avg=13.27, stdev=14.81
00:14:06.312 clat percentiles (msec):
00:14:06.312 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 7],
00:14:06.312 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 11], 60.00th=[ 12],
00:14:06.312 | 70.00th=[ 13], 80.00th=[ 16], 90.00th=[ 21], 95.00th=[ 27],
00:14:06.312 | 99.00th=[ 83], 99.50th=[ 111], 99.90th=[ 148], 99.95th=[ 148],
00:14:06.312 | 99.99th=[ 148]
00:14:06.312 write: IOPS=96, BW=12.0MiB/s (12.6MB/s)(105MiB/8700msec); 0 zone resets
00:14:06.312 slat (usec): min=30, max=2378, avg=122.53, stdev=201.08
00:14:06.312 clat (msec): min=45, max=270, avg=82.31, stdev=36.31
00:14:06.312 lat (msec): min=45, max=270, avg=82.43, stdev=36.33
00:14:06.312 clat percentiles (msec):
00:14:06.312 | 1.00th=[ 47], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 53],
00:14:06.312 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 73], 60.00th=[ 84],
00:14:06.312 | 70.00th=[ 94], 80.00th=[ 108], 90.00th=[ 125], 95.00th=[ 148],
00:14:06.312 | 99.00th=[ 236], 99.50th=[ 255], 99.90th=[ 271], 99.95th=[ 271],
00:14:06.312 | 99.99th=[ 271]
00:14:06.312 bw ( KiB/s): min= 3072, max=20224, per=1.08%, avg=10633.60, stdev=4373.15, samples=20
00:14:06.312 iops : min= 24, max= 158, avg=82.90, stdev=34.20, samples=20
00:14:06.312 lat (msec) : 4=0.79%, 10=22.89%, 20=20.27%, 50=10.32%, 100=32.23%
00:14:06.312 lat (msec) : 250=13.13%, 500=0.37%
00:14:06.312 cpu : usr=0.59%, sys=0.36%, ctx=2532, majf=0, minf=1
00:14:06.312 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.312 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.312 issued rwts: total=800,838,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.312 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.312 job61: (groupid=0, jobs=1): err= 0: pid=71589: Mon Jul 22 17:18:25 2024
00:14:06.312 read: IOPS=95, BW=11.9MiB/s (12.5MB/s)(100MiB/8401msec)
00:14:06.312 slat (usec): min=5, max=1413, avg=59.95, stdev=120.95
00:14:06.312 clat (msec): min=3, max=178, avg=16.94, stdev=22.71
00:14:06.312 lat (msec): min=3, max=178, avg=17.00, stdev=22.71
00:14:06.312 clat percentiles (msec):
00:14:06.312 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8],
00:14:06.312 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 13],
00:14:06.312 | 70.00th=[ 15], 80.00th=[ 19], 90.00th=[ 28], 95.00th=[ 48],
00:14:06.312 | 99.00th=[ 150], 99.50th=[ 176], 99.90th=[ 180], 99.95th=[ 180],
00:14:06.312 | 99.99th=[ 180]
00:14:06.312 write: IOPS=96, BW=12.1MiB/s (12.7MB/s)(101MiB/8339msec); 0 zone resets
00:14:06.312 slat (usec): min=37, max=3413, avg=144.21, stdev=252.24
00:14:06.312 clat (msec): min=31, max=246, avg=81.97, stdev=35.45
00:14:06.312 lat (msec): min=32, max=246, avg=82.11, stdev=35.46
00:14:06.312 clat percentiles (msec):
00:14:06.312 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 53],
00:14:06.312 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 84],
00:14:06.313 | 70.00th=[ 96], 80.00th=[ 110], 90.00th=[ 128], 95.00th=[ 146],
00:14:06.313 | 99.00th=[ 213], 99.50th=[ 241], 99.90th=[ 247], 99.95th=[ 247],
00:14:06.313 | 99.99th=[ 247]
00:14:06.313 bw ( KiB/s): min= 3065, max=18650, per=1.04%, avg=10230.60, stdev=4756.35, samples=20
00:14:06.313 iops : min= 23, max= 145, avg=79.75, stdev=37.19, samples=20
00:14:06.313 lat (msec) : 4=0.19%, 10=22.64%, 20=18.35%, 50=11.88%, 100=32.77%
00:14:06.313 lat (msec) : 250=14.18%
00:14:06.313 cpu : usr=0.62%, sys=0.29%, ctx=2638, majf=0, minf=1
00:14:06.313 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.313 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.313 issued rwts: total=800,808,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.313 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.313 job62: (groupid=0, jobs=1): err= 0: pid=71590: Mon Jul 22 17:18:25 2024
00:14:06.313 read: IOPS=95, BW=11.9MiB/s (12.5MB/s)(105MiB/8782msec)
00:14:06.313 slat (usec): min=6, max=1520, avg=59.54, stdev=132.84
00:14:06.313 clat (usec): min=2382, max=89019, avg=14390.80, stdev=9375.55
00:14:06.313 lat (usec): min=3775, max=89041, avg=14450.34, stdev=9374.12
00:14:06.313 clat percentiles (usec):
00:14:06.313 | 1.00th=[ 5211], 5.00th=[ 6128], 10.00th=[ 6718], 20.00th=[ 7963],
00:14:06.313 | 30.00th=[ 9372], 40.00th=[11469], 50.00th=[13304], 60.00th=[14484],
00:14:06.313 | 70.00th=[15401], 80.00th=[17433], 90.00th=[22938], 95.00th=[27395],
00:14:06.313 | 99.00th=[48497], 99.50th=[79168], 99.90th=[88605], 99.95th=[88605],
00:14:06.313 | 99.99th=[88605]
00:14:06.313 write: IOPS=113, BW=14.2MiB/s (14.8MB/s)(120MiB/8476msec); 0 zone resets
00:14:06.313 slat (usec): min=37, max=2702, avg=134.61, stdev=207.36
00:14:06.313 clat (msec): min=25, max=277, avg=69.82, stdev=31.95
00:14:06.313 lat (msec): min=25, max=277, avg=69.95, stdev=31.97
00:14:06.313 clat percentiles (msec):
00:14:06.313 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 51],
00:14:06.313 | 30.00th=[ 53], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 63],
00:14:06.313 | 70.00th=[ 70], 80.00th=[ 85], 90.00th=[ 104], 95.00th=[ 138],
00:14:06.313 | 99.00th=[ 201], 99.50th=[ 226], 99.90th=[ 279], 99.95th=[ 279],
00:14:06.313 | 99.99th=[ 279]
00:14:06.313 bw ( KiB/s): min= 2816, max=19200, per=1.26%, avg=12360.42, stdev=5942.26, samples=19
00:14:06.313 iops : min= 22, max= 150, avg=96.32, stdev=46.39, samples=19
00:14:06.313 lat (msec) : 4=0.11%, 10=15.23%, 20=24.29%, 50=15.84%, 100=38.63%
00:14:06.313 lat (msec) : 250=5.67%, 500=0.22%
00:14:06.313 cpu : usr=0.69%, sys=0.33%, ctx=2917, majf=0, minf=3
00:14:06.313 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.313 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.313 issued rwts: total=839,960,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.313 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.313 job63: (groupid=0, jobs=1): err= 0: pid=71591: Mon Jul 22 17:18:25 2024
00:14:06.313 read: IOPS=94, BW=11.8MiB/s (12.4MB/s)(100MiB/8441msec)
00:14:06.313 slat (usec): min=5, max=1211, avg=65.66, stdev=128.80
00:14:06.313 clat (msec): min=3, max=160, avg=14.09, stdev=17.03
00:14:06.313 lat (msec): min=3, max=160, avg=14.15, stdev=17.03
00:14:06.313 clat percentiles (msec):
00:14:06.313 | 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 8],
00:14:06.313 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 11], 60.00th=[ 12],
00:14:06.313 | 70.00th=[ 14], 80.00th=[ 16], 90.00th=[ 22], 95.00th=[ 32],
00:14:06.313 | 99.00th=[ 74], 99.50th=[ 157], 99.90th=[ 161], 99.95th=[ 161],
00:14:06.313 | 99.99th=[ 161]
00:14:06.313 write: IOPS=99, BW=12.4MiB/s (13.0MB/s)(107MiB/8619msec); 0 zone resets
00:14:06.313 slat (usec): min=30, max=4268, avg=126.43, stdev=203.70
00:14:06.313 clat (msec): min=36, max=221, avg=79.89, stdev=32.45
00:14:06.313 lat (msec): min=36, max=221, avg=80.02, stdev=32.46
00:14:06.313 clat percentiles (msec):
00:14:06.313 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 52],
00:14:06.313 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 81],
00:14:06.313 | 70.00th=[ 92], 80.00th=[ 106], 90.00th=[ 123], 95.00th=[ 146],
00:14:06.313 | 99.00th=[ 190], 99.50th=[ 203], 99.90th=[ 222], 99.95th=[ 222],
00:14:06.313 | 99.99th=[ 222]
00:14:06.313 bw ( KiB/s): min= 2816, max=19417, per=1.10%, avg=10847.20, stdev=4819.68, samples=20
00:14:06.313 iops : min= 22, max= 151, avg=84.55, stdev=37.60, samples=20
00:14:06.313 lat (msec) : 4=0.06%, 10=23.75%, 20=18.55%, 50=12.57%, 100=32.75%
00:14:06.313 lat (msec) : 250=12.33%
00:14:06.313 cpu : usr=0.59%, sys=0.34%, ctx=2675, majf=0, minf=1
00:14:06.313 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.313 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.313 issued rwts: total=800,855,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.313 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.313 job64: (groupid=0, jobs=1): err= 0: pid=71592: Mon Jul 22 17:18:25 2024
00:14:06.313 read: IOPS=91, BW=11.4MiB/s (12.0MB/s)(100MiB/8736msec)
00:14:06.313 slat (usec): min=6, max=1804, avg=66.71, stdev=146.27
00:14:06.313 clat (msec): min=5, max=142, avg=15.34, stdev=14.14
00:14:06.313 lat (msec): min=5, max=142, avg=15.40, stdev=14.14
00:14:06.313 clat percentiles (msec):
00:14:06.313 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9],
00:14:06.313 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14],
00:14:06.313 | 70.00th=[ 15], 80.00th=[ 17], 90.00th=[ 23], 95.00th=[ 34],
00:14:06.313 | 99.00th=[ 77], 99.50th=[ 127], 99.90th=[ 142], 99.95th=[ 142],
00:14:06.313 | 99.99th=[ 142]
00:14:06.313 write: IOPS=110, BW=13.8MiB/s (14.5MB/s)(117MiB/8482msec); 0 zone resets
00:14:06.313 slat (usec): min=36, max=1721, avg=111.80, stdev=135.49
00:14:06.313 clat (msec): min=44, max=271, avg=71.64, stdev=31.49
00:14:06.313 lat (msec): min=44, max=271, avg=71.75, stdev=31.50
00:14:06.313 clat percentiles (msec):
00:14:06.313 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 51],
00:14:06.313 | 30.00th=[ 54], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 65],
00:14:06.313 | 70.00th=[ 73], 80.00th=[ 88], 90.00th=[ 111], 95.00th=[ 134],
00:14:06.313 | 99.00th=[ 197], 99.50th=[ 241], 99.90th=[ 271], 99.95th=[ 271],
00:14:06.313 | 99.99th=[ 271]
00:14:06.313 bw ( KiB/s): min= 2560, max=18981, per=1.21%, avg=11897.25, stdev=5534.46, samples=20
00:14:06.313 iops : min= 20, max= 148, avg=92.80, stdev=43.21, samples=20
00:14:06.313 lat (msec) : 10=13.59%, 20=26.83%, 50=13.70%, 100=37.94%, 250=7.77%
00:14:06.313 lat (msec) : 500=0.17%
00:14:06.313 cpu : usr=0.67%, sys=0.30%, ctx=2893, majf=0, minf=1
00:14:06.313 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.313 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.313 issued rwts: total=800,937,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.313 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.313 job65: (groupid=0, jobs=1): err= 0: pid=71593: Mon Jul 22 17:18:25 2024
00:14:06.313 read: IOPS=89, BW=11.2MiB/s (11.7MB/s)(100MiB/8935msec)
00:14:06.313 slat (usec): min=6, max=1909, avg=55.04, stdev=136.63
00:14:06.313 clat (usec): min=3748, max=54324, avg=13621.68, stdev=6304.36
00:14:06.313 lat (usec): min=3786, max=54337, avg=13676.72, stdev=6307.28
00:14:06.313 clat percentiles (usec):
00:14:06.313 | 1.00th=[ 5866], 5.00th=[ 7504], 10.00th=[ 8029], 20.00th=[ 8979],
00:14:06.313 | 30.00th=[10028], 40.00th=[10945], 50.00th=[12125], 60.00th=[12913],
00:14:06.313 | 70.00th=[14484], 80.00th=[16909], 90.00th=[21890], 95.00th=[26084],
00:14:06.313 | 99.00th=[38536], 99.50th=[44303], 99.90th=[54264], 99.95th=[54264],
00:14:06.313 | 99.99th=[54264]
00:14:06.313 write: IOPS=109, BW=13.7MiB/s (14.4MB/s)(119MiB/8673msec); 0 zone resets
00:14:06.313 slat (usec): min=31, max=1561, avg=127.46, stdev=150.77
00:14:06.313 clat (msec): min=9, max=255, avg=72.44, stdev=31.53
00:14:06.313 lat (msec): min=9, max=255, avg=72.57, stdev=31.56
00:14:06.313 clat percentiles (msec):
00:14:06.313 | 1.00th=[ 26], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 51],
00:14:06.313 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 63], 60.00th=[ 69],
00:14:06.313 | 70.00th=[ 78], 80.00th=[ 88], 90.00th=[ 112], 95.00th=[ 136],
00:14:06.313 | 99.00th=[ 199], 99.50th=[ 220], 99.90th=[ 255], 99.95th=[ 255],
00:14:06.313 | 99.99th=[ 255]
00:14:06.313 bw ( KiB/s): min= 2048, max=19200, per=1.23%, avg=12068.05, stdev=5518.57, samples=20
00:14:06.313 iops : min= 16, max= 150, avg=94.20, stdev=43.12, samples=20
00:14:06.313 lat (msec) : 4=0.06%, 10=13.31%, 20=27.49%, 50=14.57%, 100=37.26%
00:14:06.313 lat (msec) : 250=7.26%, 500=0.06%
00:14:06.313 cpu : usr=0.69%, sys=0.34%, ctx=2826, majf=0, minf=5
00:14:06.313 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.313 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.313 issued rwts: total=800,950,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.313 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.313 job66: (groupid=0, jobs=1): err= 0: pid=71594: Mon Jul 22 17:18:25 2024
00:14:06.313 read: IOPS=107, BW=13.5MiB/s (14.1MB/s)(120MiB/8892msec)
00:14:06.313 slat (usec): min=6, max=1474, avg=53.86, stdev=108.59
00:14:06.313 clat (usec): min=3830, max=48730, avg=13095.85, stdev=6383.97
00:14:06.313 lat (usec): min=3872, max=48738, avg=13149.70, stdev=6385.07
00:14:06.313 clat percentiles (usec):
00:14:06.313 | 1.00th=[ 4883], 5.00th=[ 5735], 10.00th=[ 6325], 20.00th=[ 7832],
00:14:06.313 | 30.00th=[ 8848], 40.00th=[10159], 50.00th=[11469], 60.00th=[13304],
00:14:06.313 | 70.00th=[15401], 80.00th=[17171], 90.00th=[22152], 95.00th=[25297],
00:14:06.313 | 99.00th=[33424], 99.50th=[35914], 99.90th=[48497], 99.95th=[48497],
00:14:06.313 | 99.99th=[48497]
00:14:06.313 write: IOPS=114, BW=14.3MiB/s (15.0MB/s)(121MiB/8452msec); 0 zone resets
00:14:06.313 slat (usec): min=38, max=2030, avg=129.18, stdev=177.53
00:14:06.313 clat (msec): min=40, max=271, avg=69.14, stdev=29.83
00:14:06.313 lat (msec): min=41, max=271, avg=69.27, stdev=29.84
00:14:06.313 clat percentiles (msec):
00:14:06.313 | 1.00th=[ 46], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 49],
00:14:06.313 | 30.00th=[ 52], 40.00th=[ 54], 50.00th=[ 57], 60.00th=[ 62],
00:14:06.313 | 70.00th=[ 68], 80.00th=[ 88], 90.00th=[ 114], 95.00th=[ 129],
00:14:06.313 | 99.00th=[ 171], 99.50th=[ 192], 99.90th=[ 271], 99.95th=[ 271],
00:14:06.313 | 99.99th=[ 271]
00:14:06.314 bw ( KiB/s): min= 3840, max=19161, per=1.25%, avg=12294.70, stdev=5631.69, samples=20
00:14:06.314 iops : min= 30, max= 149, avg=95.85, stdev=43.97, samples=20
00:14:06.314 lat (msec) : 4=0.05%, 10=19.09%, 20=24.33%, 50=19.04%, 100=29.77%
00:14:06.314 lat (msec) : 250=7.62%, 500=0.10%
00:14:06.314 cpu : usr=0.76%, sys=0.30%, ctx=3184, majf=0, minf=3
00:14:06.314 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.314 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.314 issued rwts: total=960,968,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.314 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.314 job67: (groupid=0, jobs=1): err= 0: pid=71595: Mon Jul 22 17:18:25 2024
00:14:06.314 read: IOPS=89, BW=11.2MiB/s (11.8MB/s)(100MiB/8897msec)
00:14:06.314 slat (usec): min=5, max=1665, avg=68.47, stdev=147.96
00:14:06.314 clat (usec): min=6856, max=63098, avg=16841.49, stdev=7859.80
00:14:06.314 lat (usec): min=6888, max=63111, avg=16909.96, stdev=7858.96
00:14:06.314 clat percentiles (usec):
00:14:06.314 | 1.00th=[ 7439], 5.00th=[ 8848], 10.00th=[10159], 20.00th=[11469],
00:14:06.314 | 30.00th=[12518], 40.00th=[13566], 50.00th=[14615], 60.00th=[15926],
00:14:06.314 | 70.00th=[17695], 80.00th=[20579], 90.00th=[25822], 95.00th=[34341],
00:14:06.314 | 99.00th=[48497], 99.50th=[52691], 99.90th=[63177], 99.95th=[63177],
00:14:06.314 | 99.99th=[63177]
00:14:06.314 write: IOPS=111, BW=14.0MiB/s (14.6MB/s)(117MiB/8351msec); 0 zone resets
00:14:06.314 slat (usec): min=31, max=1277, avg=113.93, stdev=130.31
00:14:06.314 clat (msec): min=19, max=210, avg=70.87, stdev=27.96
00:14:06.314 lat (msec): min=19, max=210, avg=70.98, stdev=27.97
00:14:06.314 clat percentiles (msec):
00:14:06.314 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 51],
00:14:06.314 | 30.00th=[ 53], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 66],
00:14:06.314 | 70.00th=[ 78], 80.00th=[ 89], 90.00th=[ 108], 95.00th=[ 130],
00:14:06.314 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 211], 99.95th=[ 211],
00:14:06.314 | 99.99th=[ 211]
00:14:06.314 bw ( KiB/s): min= 1792, max=18688, per=1.20%, avg=11825.95, stdev=5681.71, samples=20
00:14:06.314 iops : min= 14, max= 146, avg=92.30, stdev=44.47, samples=20
00:14:06.314 lat (msec) : 10=4.33%, 20=31.62%, 50=20.08%, 100=37.16%, 250=6.81%
00:14:06.314 cpu : usr=0.62%, sys=0.37%, ctx=2813, majf=0, minf=3
00:14:06.314 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.314 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.314 issued rwts: total=800,933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.314 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.314 job68: (groupid=0, jobs=1): err= 0: pid=71596: Mon Jul 22 17:18:25 2024 00:14:06.314 read: IOPS=97, BW=12.2MiB/s (12.8MB/s)(100MiB/8188msec) 00:14:06.314 slat (usec): min=6, max=1105, avg=58.19, stdev=116.24 00:14:06.314 clat (msec): min=2, max=126, avg=15.65, stdev=17.15 00:14:06.314 lat (msec): min=2, max=127, avg=15.71, stdev=17.15 00:14:06.314 clat percentiles (msec): 00:14:06.314 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:14:06.314 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 13], 00:14:06.314 | 70.00th=[ 15], 80.00th=[ 18], 90.00th=[ 25], 95.00th=[ 39], 00:14:06.314 | 99.00th=[ 107], 99.50th=[ 114], 99.90th=[ 128], 99.95th=[ 128], 00:14:06.314 | 99.99th=[ 128] 00:14:06.314 write: IOPS=95, BW=12.0MiB/s (12.5MB/s)(101MiB/8443msec); 0 zone resets 00:14:06.314 slat (usec): min=37, max=2938, avg=143.75, stdev=223.72 00:14:06.314 clat (msec): min=43, max=224, avg=83.01, stdev=36.60 00:14:06.314 lat (msec): min=43, max=224, avg=83.15, stdev=36.61 00:14:06.314 clat percentiles (msec): 00:14:06.314 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 54], 00:14:06.314 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 68], 60.00th=[ 81], 00:14:06.314 | 70.00th=[ 97], 80.00th=[ 115], 90.00th=[ 136], 95.00th=[ 153], 00:14:06.314 | 99.00th=[ 222], 99.50th=[ 226], 99.90th=[ 226], 99.95th=[ 226], 00:14:06.314 | 99.99th=[ 226] 00:14:06.314 bw ( KiB/s): min= 1792, max=18432, per=1.04%, avg=10247.60, stdev=5298.62, samples=20 00:14:06.314 iops : min= 14, max= 144, avg=79.90, stdev=41.39, samples=20 00:14:06.314 lat (msec) : 4=0.06%, 10=19.65%, 20=22.76%, 50=10.95%, 100=31.78% 00:14:06.314 lat (msec) : 250=14.80% 00:14:06.314 cpu : usr=0.65%, sys=0.27%, ctx=2613, majf=0, minf=3 00:14:06.314 IO depths : 1=0.7%, 2=1.4%, 
4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.314 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.314 issued rwts: total=800,808,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.314 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.314 job69: (groupid=0, jobs=1): err= 0: pid=71597: Mon Jul 22 17:18:25 2024 00:14:06.314 read: IOPS=87, BW=11.0MiB/s (11.5MB/s)(100MiB/9094msec) 00:14:06.314 slat (usec): min=6, max=834, avg=45.31, stdev=78.96 00:14:06.314 clat (usec): min=2915, max=74781, avg=12775.18, stdev=7480.02 00:14:06.314 lat (usec): min=3013, max=74788, avg=12820.49, stdev=7478.41 00:14:06.314 clat percentiles (usec): 00:14:06.314 | 1.00th=[ 6128], 5.00th=[ 6980], 10.00th=[ 7242], 20.00th=[ 8160], 00:14:06.314 | 30.00th=[ 9241], 40.00th=[10552], 50.00th=[11731], 60.00th=[12649], 00:14:06.314 | 70.00th=[13698], 80.00th=[14615], 90.00th=[18744], 95.00th=[22414], 00:14:06.314 | 99.00th=[37487], 99.50th=[71828], 99.90th=[74974], 99.95th=[74974], 00:14:06.314 | 99.99th=[74974] 00:14:06.314 write: IOPS=108, BW=13.6MiB/s (14.2MB/s)(119MiB/8783msec); 0 zone resets 00:14:06.314 slat (usec): min=37, max=7284, avg=139.90, stdev=320.98 00:14:06.314 clat (msec): min=4, max=289, avg=72.93, stdev=36.30 00:14:06.314 lat (msec): min=4, max=289, avg=73.07, stdev=36.35 00:14:06.314 clat percentiles (msec): 00:14:06.314 | 1.00th=[ 7], 5.00th=[ 46], 10.00th=[ 49], 20.00th=[ 52], 00:14:06.314 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 63], 60.00th=[ 68], 00:14:06.314 | 70.00th=[ 79], 80.00th=[ 90], 90.00th=[ 121], 95.00th=[ 140], 00:14:06.314 | 99.00th=[ 213], 99.50th=[ 249], 99.90th=[ 288], 99.95th=[ 288], 00:14:06.314 | 99.99th=[ 288] 00:14:06.314 bw ( KiB/s): min= 1280, max=24625, per=1.23%, avg=12118.90, stdev=6089.26, samples=20 00:14:06.314 iops : min= 10, max= 192, avg=94.45, stdev=47.63, samples=20 00:14:06.314 lat (msec) : 4=0.06%, 
10=17.84%, 20=26.17%, 50=9.92%, 100=37.69% 00:14:06.314 lat (msec) : 250=8.15%, 500=0.17% 00:14:06.314 cpu : usr=0.76%, sys=0.24%, ctx=2756, majf=0, minf=3 00:14:06.314 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.314 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.314 issued rwts: total=800,954,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.314 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.314 job70: (groupid=0, jobs=1): err= 0: pid=71598: Mon Jul 22 17:18:25 2024 00:14:06.314 read: IOPS=56, BW=7259KiB/s (7433kB/s)(50.5MiB/7124msec) 00:14:06.314 slat (usec): min=6, max=1822, avg=64.45, stdev=156.83 00:14:06.314 clat (msec): min=4, max=468, avg=46.90, stdev=84.05 00:14:06.314 lat (msec): min=4, max=468, avg=46.96, stdev=84.07 00:14:06.314 clat percentiles (msec): 00:14:06.314 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 14], 00:14:06.314 | 30.00th=[ 15], 40.00th=[ 17], 50.00th=[ 20], 60.00th=[ 23], 00:14:06.314 | 70.00th=[ 28], 80.00th=[ 37], 90.00th=[ 105], 95.00th=[ 300], 00:14:06.314 | 99.00th=[ 456], 99.50th=[ 464], 99.90th=[ 468], 99.95th=[ 468], 00:14:06.314 | 99.99th=[ 468] 00:14:06.314 write: IOPS=62, BW=8053KiB/s (8247kB/s)(60.0MiB/7629msec); 0 zone resets 00:14:06.314 slat (usec): min=30, max=1504, avg=139.24, stdev=162.89 00:14:06.314 clat (msec): min=70, max=334, avg=126.28, stdev=51.34 00:14:06.314 lat (msec): min=70, max=334, avg=126.42, stdev=51.34 00:14:06.314 clat percentiles (msec): 00:14:06.314 | 1.00th=[ 72], 5.00th=[ 73], 10.00th=[ 77], 20.00th=[ 82], 00:14:06.314 | 30.00th=[ 90], 40.00th=[ 101], 50.00th=[ 109], 60.00th=[ 128], 00:14:06.314 | 70.00th=[ 144], 80.00th=[ 165], 90.00th=[ 203], 95.00th=[ 230], 00:14:06.314 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334], 00:14:06.314 | 99.99th=[ 334] 00:14:06.314 bw ( KiB/s): min= 1024, max=11752, 
per=0.76%, avg=7486.94, stdev=3130.66, samples=16 00:14:06.314 iops : min= 8, max= 91, avg=58.38, stdev=24.43, samples=16 00:14:06.314 lat (msec) : 10=1.81%, 20=22.17%, 50=14.59%, 100=24.10%, 250=32.58% 00:14:06.314 lat (msec) : 500=4.75% 00:14:06.314 cpu : usr=0.30%, sys=0.20%, ctx=1567, majf=0, minf=8 00:14:06.314 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.314 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.314 issued rwts: total=404,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.314 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.314 job71: (groupid=0, jobs=1): err= 0: pid=71599: Mon Jul 22 17:18:25 2024 00:14:06.315 read: IOPS=54, BW=6964KiB/s (7131kB/s)(60.0MiB/8823msec) 00:14:06.315 slat (usec): min=7, max=1375, avg=54.48, stdev=117.69 00:14:06.315 clat (msec): min=6, max=211, avg=25.17, stdev=34.44 00:14:06.315 lat (msec): min=6, max=211, avg=25.23, stdev=34.45 00:14:06.315 clat percentiles (msec): 00:14:06.315 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:14:06.315 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 17], 00:14:06.315 | 70.00th=[ 20], 80.00th=[ 24], 90.00th=[ 36], 95.00th=[ 83], 00:14:06.315 | 99.00th=[ 203], 99.50th=[ 205], 99.90th=[ 211], 99.95th=[ 211], 00:14:06.315 | 99.99th=[ 211] 00:14:06.315 write: IOPS=74, BW=9547KiB/s (9776kB/s)(80.0MiB/8581msec); 0 zone resets 00:14:06.315 slat (usec): min=37, max=3631, avg=136.46, stdev=224.26 00:14:06.315 clat (msec): min=2, max=367, avg=106.31, stdev=55.30 00:14:06.315 lat (msec): min=2, max=367, avg=106.45, stdev=55.31 00:14:06.315 clat percentiles (msec): 00:14:06.315 | 1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 71], 20.00th=[ 73], 00:14:06.315 | 30.00th=[ 78], 40.00th=[ 84], 50.00th=[ 89], 60.00th=[ 101], 00:14:06.315 | 70.00th=[ 122], 80.00th=[ 140], 90.00th=[ 169], 95.00th=[ 207], 00:14:06.315 | 
99.00th=[ 313], 99.50th=[ 326], 99.90th=[ 368], 99.95th=[ 368], 00:14:06.315 | 99.99th=[ 368] 00:14:06.315 bw ( KiB/s): min= 2048, max=21248, per=0.87%, avg=8525.63, stdev=4797.54, samples=19 00:14:06.315 iops : min= 16, max= 166, avg=66.42, stdev=37.48, samples=19 00:14:06.315 lat (msec) : 4=0.62%, 10=5.00%, 20=28.57%, 50=8.57%, 100=32.77% 00:14:06.315 lat (msec) : 250=22.59%, 500=1.88% 00:14:06.315 cpu : usr=0.45%, sys=0.23%, ctx=1755, majf=0, minf=1 00:14:06.315 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.315 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.315 issued rwts: total=480,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.315 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.315 job72: (groupid=0, jobs=1): err= 0: pid=71600: Mon Jul 22 17:18:25 2024 00:14:06.315 read: IOPS=63, BW=8103KiB/s (8298kB/s)(60.0MiB/7582msec) 00:14:06.315 slat (usec): min=5, max=2509, avg=74.92, stdev=189.46 00:14:06.315 clat (usec): min=5256, max=68167, avg=18131.48, stdev=9789.55 00:14:06.315 lat (usec): min=5464, max=68220, avg=18206.40, stdev=9781.58 00:14:06.315 clat percentiles (usec): 00:14:06.315 | 1.00th=[ 6194], 5.00th=[ 7963], 10.00th=[10552], 20.00th=[11731], 00:14:06.315 | 30.00th=[12911], 40.00th=[14353], 50.00th=[16319], 60.00th=[17433], 00:14:06.315 | 70.00th=[18744], 80.00th=[21365], 90.00th=[28181], 95.00th=[40109], 00:14:06.315 | 99.00th=[58983], 99.50th=[62129], 99.90th=[68682], 99.95th=[68682], 00:14:06.315 | 99.99th=[68682] 00:14:06.315 write: IOPS=67, BW=8623KiB/s (8830kB/s)(75.4MiB/8951msec); 0 zone resets 00:14:06.315 slat (usec): min=36, max=1934, avg=138.59, stdev=200.06 00:14:06.315 clat (msec): min=61, max=432, avg=117.76, stdev=53.59 00:14:06.315 lat (msec): min=61, max=432, avg=117.90, stdev=53.59 00:14:06.315 clat percentiles (msec): 00:14:06.315 | 1.00th=[ 71], 5.00th=[ 73], 
10.00th=[ 75], 20.00th=[ 81], 00:14:06.315 | 30.00th=[ 85], 40.00th=[ 89], 50.00th=[ 95], 60.00th=[ 105], 00:14:06.315 | 70.00th=[ 127], 80.00th=[ 159], 90.00th=[ 180], 95.00th=[ 226], 00:14:06.315 | 99.00th=[ 309], 99.50th=[ 359], 99.90th=[ 435], 99.95th=[ 435], 00:14:06.315 | 99.99th=[ 435] 00:14:06.315 bw ( KiB/s): min= 1792, max=13056, per=0.78%, avg=7624.35, stdev=3645.55, samples=20 00:14:06.315 iops : min= 14, max= 102, avg=59.30, stdev=28.47, samples=20 00:14:06.315 lat (msec) : 10=3.23%, 20=29.92%, 50=9.97%, 100=31.86%, 250=23.08% 00:14:06.315 lat (msec) : 500=1.94% 00:14:06.315 cpu : usr=0.45%, sys=0.18%, ctx=1826, majf=0, minf=3 00:14:06.315 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.315 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.315 issued rwts: total=480,603,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.315 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.315 job73: (groupid=0, jobs=1): err= 0: pid=71601: Mon Jul 22 17:18:25 2024 00:14:06.315 read: IOPS=58, BW=7458KiB/s (7637kB/s)(60.0MiB/8238msec) 00:14:06.315 slat (usec): min=6, max=767, avg=49.61, stdev=90.64 00:14:06.315 clat (usec): min=8085, max=43947, avg=22869.08, stdev=8181.73 00:14:06.315 lat (usec): min=8853, max=43957, avg=22918.69, stdev=8181.53 00:14:06.315 clat percentiles (usec): 00:14:06.315 | 1.00th=[10159], 5.00th=[10814], 10.00th=[12649], 20.00th=[14353], 00:14:06.315 | 30.00th=[15664], 40.00th=[20841], 50.00th=[23987], 60.00th=[25560], 00:14:06.315 | 70.00th=[26870], 80.00th=[28967], 90.00th=[33817], 95.00th=[37487], 00:14:06.315 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:14:06.315 | 99.99th=[43779] 00:14:06.315 write: IOPS=70, BW=8989KiB/s (9205kB/s)(76.1MiB/8672msec); 0 zone resets 00:14:06.315 slat (usec): min=35, max=3252, avg=152.34, stdev=236.56 00:14:06.315 clat (msec): 
min=56, max=304, avg=112.89, stdev=49.08 00:14:06.315 lat (msec): min=56, max=304, avg=113.04, stdev=49.07 00:14:06.315 clat percentiles (msec): 00:14:06.315 | 1.00th=[ 64], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 77], 00:14:06.315 | 30.00th=[ 82], 40.00th=[ 88], 50.00th=[ 94], 60.00th=[ 102], 00:14:06.315 | 70.00th=[ 120], 80.00th=[ 144], 90.00th=[ 194], 95.00th=[ 226], 00:14:06.315 | 99.00th=[ 271], 99.50th=[ 279], 99.90th=[ 305], 99.95th=[ 305], 00:14:06.315 | 99.99th=[ 305] 00:14:06.315 bw ( KiB/s): min= 2048, max=13082, per=0.87%, avg=8541.22, stdev=3306.65, samples=18 00:14:06.315 iops : min= 16, max= 102, avg=66.56, stdev=25.87, samples=18 00:14:06.315 lat (msec) : 10=0.18%, 20=16.44%, 50=27.46%, 100=32.97%, 250=21.40% 00:14:06.315 lat (msec) : 500=1.56% 00:14:06.315 cpu : usr=0.36%, sys=0.26%, ctx=1813, majf=0, minf=3 00:14:06.315 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.315 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.315 issued rwts: total=480,609,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.315 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.315 job74: (groupid=0, jobs=1): err= 0: pid=71606: Mon Jul 22 17:18:25 2024 00:14:06.315 read: IOPS=58, BW=7537KiB/s (7718kB/s)(60.0MiB/8152msec) 00:14:06.315 slat (usec): min=6, max=2181, avg=59.32, stdev=131.14 00:14:06.315 clat (usec): min=10244, max=96277, avg=25003.69, stdev=12225.71 00:14:06.315 lat (usec): min=10713, max=96319, avg=25063.01, stdev=12217.89 00:14:06.315 clat percentiles (usec): 00:14:06.315 | 1.00th=[11207], 5.00th=[13173], 10.00th=[14615], 20.00th=[16909], 00:14:06.315 | 30.00th=[18220], 40.00th=[20317], 50.00th=[22414], 60.00th=[24511], 00:14:06.315 | 70.00th=[27132], 80.00th=[30278], 90.00th=[35914], 95.00th=[43254], 00:14:06.315 | 99.00th=[88605], 99.50th=[93848], 99.90th=[95945], 99.95th=[95945], 00:14:06.315 | 
99.99th=[95945] 00:14:06.315 write: IOPS=73, BW=9401KiB/s (9627kB/s)(78.6MiB/8564msec); 0 zone resets 00:14:06.315 slat (usec): min=37, max=2818, avg=136.09, stdev=201.84 00:14:06.315 clat (msec): min=16, max=409, avg=107.74, stdev=49.81 00:14:06.315 lat (msec): min=16, max=409, avg=107.88, stdev=49.84 00:14:06.315 clat percentiles (msec): 00:14:06.315 | 1.00th=[ 23], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 74], 00:14:06.315 | 30.00th=[ 79], 40.00th=[ 84], 50.00th=[ 91], 60.00th=[ 102], 00:14:06.315 | 70.00th=[ 114], 80.00th=[ 132], 90.00th=[ 169], 95.00th=[ 199], 00:14:06.315 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 409], 99.95th=[ 409], 00:14:06.315 | 99.99th=[ 409] 00:14:06.315 bw ( KiB/s): min= 512, max=13312, per=0.85%, avg=8379.53, stdev=3987.94, samples=19 00:14:06.315 iops : min= 4, max= 104, avg=65.37, stdev=31.27, samples=19 00:14:06.315 lat (msec) : 20=16.32%, 50=26.42%, 100=33.99%, 250=22.00%, 500=1.26% 00:14:06.315 cpu : usr=0.38%, sys=0.27%, ctx=1859, majf=0, minf=3 00:14:06.315 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.315 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.315 issued rwts: total=480,629,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.315 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.315 job75: (groupid=0, jobs=1): err= 0: pid=71607: Mon Jul 22 17:18:25 2024 00:14:06.315 read: IOPS=62, BW=7950KiB/s (8141kB/s)(60.0MiB/7728msec) 00:14:06.315 slat (usec): min=6, max=1326, avg=54.18, stdev=116.15 00:14:06.315 clat (usec): min=6918, max=90323, avg=21429.72, stdev=12381.16 00:14:06.315 lat (usec): min=7031, max=90336, avg=21483.90, stdev=12385.05 00:14:06.315 clat percentiles (usec): 00:14:06.315 | 1.00th=[ 9110], 5.00th=[10552], 10.00th=[11863], 20.00th=[14222], 00:14:06.315 | 30.00th=[15664], 40.00th=[17433], 50.00th=[18482], 60.00th=[19530], 00:14:06.315 | 
70.00th=[21103], 80.00th=[24249], 90.00th=[34341], 95.00th=[47973], 00:14:06.315 | 99.00th=[79168], 99.50th=[81265], 99.90th=[90702], 99.95th=[90702], 00:14:06.315 | 99.99th=[90702] 00:14:06.315 write: IOPS=64, BW=8259KiB/s (8458kB/s)(70.6MiB/8756msec); 0 zone resets 00:14:06.315 slat (usec): min=37, max=1226, avg=127.56, stdev=154.18 00:14:06.315 clat (msec): min=65, max=303, avg=122.97, stdev=52.04 00:14:06.315 lat (msec): min=65, max=303, avg=123.09, stdev=52.04 00:14:06.315 clat percentiles (msec): 00:14:06.315 | 1.00th=[ 69], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 81], 00:14:06.315 | 30.00th=[ 88], 40.00th=[ 95], 50.00th=[ 106], 60.00th=[ 120], 00:14:06.315 | 70.00th=[ 142], 80.00th=[ 161], 90.00th=[ 192], 95.00th=[ 236], 00:14:06.315 | 99.00th=[ 292], 99.50th=[ 300], 99.90th=[ 305], 99.95th=[ 305], 00:14:06.315 | 99.99th=[ 305] 00:14:06.315 bw ( KiB/s): min= 2048, max=13824, per=0.77%, avg=7512.42, stdev=3150.50, samples=19 00:14:06.315 iops : min= 16, max= 108, avg=58.53, stdev=24.71, samples=19 00:14:06.315 lat (msec) : 10=1.34%, 20=28.23%, 50=14.07%, 100=27.46%, 250=26.41% 00:14:06.315 lat (msec) : 500=2.49% 00:14:06.315 cpu : usr=0.41%, sys=0.21%, ctx=1728, majf=0, minf=5 00:14:06.315 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.315 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.315 issued rwts: total=480,565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.315 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.315 job76: (groupid=0, jobs=1): err= 0: pid=71608: Mon Jul 22 17:18:25 2024 00:14:06.315 read: IOPS=57, BW=7361KiB/s (7537kB/s)(60.0MiB/8347msec) 00:14:06.315 slat (usec): min=6, max=1069, avg=55.27, stdev=102.01 00:14:06.315 clat (usec): min=10707, max=45534, avg=20325.73, stdev=7078.82 00:14:06.316 lat (usec): min=10792, max=45544, avg=20381.00, stdev=7075.00 00:14:06.316 clat 
percentiles (usec): 00:14:06.316 | 1.00th=[11076], 5.00th=[11994], 10.00th=[13304], 20.00th=[14353], 00:14:06.316 | 30.00th=[15139], 40.00th=[16319], 50.00th=[17695], 60.00th=[19792], 00:14:06.316 | 70.00th=[24773], 80.00th=[26608], 90.00th=[29492], 95.00th=[31851], 00:14:06.316 | 99.00th=[43254], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:14:06.316 | 99.99th=[45351] 00:14:06.316 write: IOPS=71, BW=9203KiB/s (9424kB/s)(79.2MiB/8818msec); 0 zone resets 00:14:06.316 slat (usec): min=35, max=3446, avg=135.35, stdev=213.03 00:14:06.316 clat (msec): min=48, max=318, avg=110.37, stdev=44.45 00:14:06.316 lat (msec): min=48, max=318, avg=110.50, stdev=44.45 00:14:06.316 clat percentiles (msec): 00:14:06.316 | 1.00th=[ 55], 5.00th=[ 72], 10.00th=[ 72], 20.00th=[ 74], 00:14:06.316 | 30.00th=[ 80], 40.00th=[ 86], 50.00th=[ 94], 60.00th=[ 106], 00:14:06.316 | 70.00th=[ 123], 80.00th=[ 146], 90.00th=[ 176], 95.00th=[ 203], 00:14:06.316 | 99.00th=[ 243], 99.50th=[ 279], 99.90th=[ 321], 99.95th=[ 321], 00:14:06.316 | 99.99th=[ 321] 00:14:06.316 bw ( KiB/s): min= 1792, max=13851, per=0.82%, avg=8006.40, stdev=3834.14, samples=20 00:14:06.316 iops : min= 14, max= 108, avg=62.35, stdev=30.00, samples=20 00:14:06.316 lat (msec) : 20=25.85%, 50=17.41%, 100=31.87%, 250=24.42%, 500=0.45% 00:14:06.316 cpu : usr=0.45%, sys=0.21%, ctx=1814, majf=0, minf=3 00:14:06.316 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.316 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.316 issued rwts: total=480,634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.316 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.316 job77: (groupid=0, jobs=1): err= 0: pid=71609: Mon Jul 22 17:18:25 2024 00:14:06.316 read: IOPS=59, BW=7654KiB/s (7838kB/s)(60.0MiB/8027msec) 00:14:06.316 slat (usec): min=7, max=1616, avg=65.65, stdev=134.19 
00:14:06.316 clat (msec): min=10, max=105, avg=25.02, stdev=15.75 00:14:06.316 lat (msec): min=11, max=105, avg=25.09, stdev=15.75 00:14:06.316 clat percentiles (msec): 00:14:06.316 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 16], 00:14:06.316 | 30.00th=[ 18], 40.00th=[ 20], 50.00th=[ 21], 60.00th=[ 23], 00:14:06.316 | 70.00th=[ 26], 80.00th=[ 31], 90.00th=[ 39], 95.00th=[ 49], 00:14:06.316 | 99.00th=[ 104], 99.50th=[ 105], 99.90th=[ 106], 99.95th=[ 106], 00:14:06.316 | 99.99th=[ 106] 00:14:06.316 write: IOPS=74, BW=9575KiB/s (9804kB/s)(80.0MiB/8556msec); 0 zone resets 00:14:06.316 slat (usec): min=40, max=1631, avg=128.99, stdev=166.15 00:14:06.316 clat (msec): min=12, max=414, avg=105.74, stdev=48.27 00:14:06.316 lat (msec): min=12, max=414, avg=105.87, stdev=48.26 00:14:06.316 clat percentiles (msec): 00:14:06.316 | 1.00th=[ 15], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 75], 00:14:06.316 | 30.00th=[ 80], 40.00th=[ 84], 50.00th=[ 92], 60.00th=[ 100], 00:14:06.316 | 70.00th=[ 111], 80.00th=[ 130], 90.00th=[ 161], 95.00th=[ 197], 00:14:06.316 | 99.00th=[ 296], 99.50th=[ 376], 99.90th=[ 414], 99.95th=[ 414], 00:14:06.316 | 99.99th=[ 414] 00:14:06.316 bw ( KiB/s): min= 1792, max=16128, per=0.87%, avg=8525.68, stdev=3803.96, samples=19 00:14:06.316 iops : min= 14, max= 126, avg=66.53, stdev=29.72, samples=19 00:14:06.316 lat (msec) : 20=20.36%, 50=21.79%, 100=34.73%, 250=21.79%, 500=1.34% 00:14:06.316 cpu : usr=0.44%, sys=0.24%, ctx=1824, majf=0, minf=3 00:14:06.316 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.316 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.316 issued rwts: total=480,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.316 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.316 job78: (groupid=0, jobs=1): err= 0: pid=71611: Mon Jul 22 17:18:25 2024 00:14:06.316 read: IOPS=59, 
BW=7566KiB/s (7747kB/s)(60.0MiB/8121msec) 00:14:06.316 slat (usec): min=6, max=764, avg=60.04, stdev=103.71 00:14:06.316 clat (msec): min=7, max=113, avg=21.85, stdev=14.12 00:14:06.316 lat (msec): min=7, max=113, avg=21.91, stdev=14.13 00:14:06.316 clat percentiles (msec): 00:14:06.316 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 13], 00:14:06.316 | 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 18], 60.00th=[ 21], 00:14:06.316 | 70.00th=[ 25], 80.00th=[ 29], 90.00th=[ 37], 95.00th=[ 50], 00:14:06.316 | 99.00th=[ 89], 99.50th=[ 106], 99.90th=[ 114], 99.95th=[ 114], 00:14:06.316 | 99.99th=[ 114] 00:14:06.316 write: IOPS=68, BW=8780KiB/s (8990kB/s)(74.9MiB/8733msec); 0 zone resets 00:14:06.316 slat (usec): min=36, max=1943, avg=142.71, stdev=203.93 00:14:06.316 clat (msec): min=31, max=389, avg=115.59, stdev=51.58 00:14:06.316 lat (msec): min=31, max=389, avg=115.74, stdev=51.59 00:14:06.316 clat percentiles (msec): 00:14:06.316 | 1.00th=[ 37], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 75], 00:14:06.316 | 30.00th=[ 86], 40.00th=[ 91], 50.00th=[ 100], 60.00th=[ 109], 00:14:06.316 | 70.00th=[ 125], 80.00th=[ 155], 90.00th=[ 176], 95.00th=[ 209], 00:14:06.316 | 99.00th=[ 330], 99.50th=[ 376], 99.90th=[ 388], 99.95th=[ 388], 00:14:06.316 | 99.99th=[ 388] 00:14:06.316 bw ( KiB/s): min= 2043, max=13568, per=0.81%, avg=7960.84, stdev=3310.42, samples=19 00:14:06.316 iops : min= 15, max= 106, avg=62.11, stdev=25.96, samples=19 00:14:06.316 lat (msec) : 10=4.73%, 20=20.76%, 50=17.89%, 100=28.82%, 250=26.14% 00:14:06.316 lat (msec) : 500=1.67% 00:14:06.316 cpu : usr=0.46%, sys=0.21%, ctx=1773, majf=0, minf=3 00:14:06.316 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.316 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.316 issued rwts: total=480,599,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.316 latency : target=0, window=0, 
percentile=100.00%, depth=8 00:14:06.316 job79: (groupid=0, jobs=1): err= 0: pid=71612: Mon Jul 22 17:18:25 2024 00:14:06.316 read: IOPS=60, BW=7704KiB/s (7889kB/s)(61.5MiB/8174msec) 00:14:06.316 slat (usec): min=6, max=1049, avg=53.31, stdev=94.74 00:14:06.316 clat (msec): min=9, max=102, avg=22.12, stdev=11.80 00:14:06.316 lat (msec): min=9, max=102, avg=22.18, stdev=11.80 00:14:06.316 clat percentiles (msec): 00:14:06.316 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 13], 00:14:06.316 | 30.00th=[ 14], 40.00th=[ 17], 50.00th=[ 23], 60.00th=[ 25], 00:14:06.316 | 70.00th=[ 26], 80.00th=[ 27], 90.00th=[ 32], 95.00th=[ 42], 00:14:06.316 | 99.00th=[ 90], 99.50th=[ 90], 99.90th=[ 103], 99.95th=[ 103], 00:14:06.316 | 99.99th=[ 103] 00:14:06.316 write: IOPS=74, BW=9476KiB/s (9703kB/s)(80.0MiB/8645msec); 0 zone resets 00:14:06.316 slat (usec): min=36, max=3400, avg=122.18, stdev=190.49 00:14:06.316 clat (msec): min=39, max=450, avg=106.80, stdev=50.64 00:14:06.316 lat (msec): min=39, max=450, avg=106.92, stdev=50.64 00:14:06.316 clat percentiles (msec): 00:14:06.316 | 1.00th=[ 47], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 73], 00:14:06.316 | 30.00th=[ 77], 40.00th=[ 83], 50.00th=[ 89], 60.00th=[ 96], 00:14:06.316 | 70.00th=[ 111], 80.00th=[ 130], 90.00th=[ 159], 95.00th=[ 207], 00:14:06.316 | 99.00th=[ 326], 99.50th=[ 368], 99.90th=[ 451], 99.95th=[ 451], 00:14:06.316 | 99.99th=[ 451] 00:14:06.316 bw ( KiB/s): min= 1788, max=13568, per=0.91%, avg=8885.67, stdev=3528.53, samples=18 00:14:06.316 iops : min= 13, max= 106, avg=69.17, stdev=27.67, samples=18 00:14:06.316 lat (msec) : 10=0.44%, 20=18.20%, 50=24.47%, 100=35.69%, 250=19.88% 00:14:06.316 lat (msec) : 500=1.33% 00:14:06.316 cpu : usr=0.37%, sys=0.25%, ctx=1845, majf=0, minf=9 00:14:06.316 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.316 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.316 issued rwts: total=492,640,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.316 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.316 job80: (groupid=0, jobs=1): err= 0: pid=71613: Mon Jul 22 17:18:25 2024
00:14:06.316 read: IOPS=77, BW=9885KiB/s (10.1MB/s)(80.0MiB/8287msec)
00:14:06.316 slat (usec): min=6, max=1561, avg=65.59, stdev=123.33
00:14:06.316 clat (usec): min=7461, max=53452, avg=19454.54, stdev=7902.27
00:14:06.316 lat (usec): min=7653, max=53480, avg=19520.13, stdev=7900.45
00:14:06.316 clat percentiles (usec):
00:14:06.316 | 1.00th=[ 9634], 5.00th=[11338], 10.00th=[11731], 20.00th=[12780],
00:14:06.316 | 30.00th=[13829], 40.00th=[15795], 50.00th=[17433], 60.00th=[19006],
00:14:06.316 | 70.00th=[21890], 80.00th=[24773], 90.00th=[30540], 95.00th=[35914],
00:14:06.316 | 99.00th=[45876], 99.50th=[47449], 99.90th=[53216], 99.95th=[53216],
00:14:06.316 | 99.99th=[53216]
00:14:06.316 write: IOPS=76, BW=9850KiB/s (10.1MB/s)(81.5MiB/8473msec); 0 zone resets
00:14:06.316 slat (usec): min=35, max=2386, avg=129.41, stdev=195.92
00:14:06.316 clat (msec): min=57, max=407, avg=103.11, stdev=42.30
00:14:06.316 lat (msec): min=57, max=407, avg=103.24, stdev=42.30
00:14:06.316 clat percentiles (msec):
00:14:06.316 | 1.00th=[ 64], 5.00th=[ 69], 10.00th=[ 70], 20.00th=[ 73],
00:14:06.316 | 30.00th=[ 79], 40.00th=[ 84], 50.00th=[ 92], 60.00th=[ 101],
00:14:06.316 | 70.00th=[ 111], 80.00th=[ 126], 90.00th=[ 148], 95.00th=[ 176],
00:14:06.316 | 99.00th=[ 284], 99.50th=[ 368], 99.90th=[ 409], 99.95th=[ 409],
00:14:06.316 | 99.99th=[ 409]
00:14:06.316 bw ( KiB/s): min= 512, max=13568, per=0.84%, avg=8251.30, stdev=4107.11, samples=20
00:14:06.316 iops : min= 4, max= 106, avg=64.25, stdev=32.26, samples=20
00:14:06.316 lat (msec) : 10=0.70%, 20=30.65%, 50=18.11%, 100=30.19%, 250=19.74%
00:14:06.316 lat (msec) : 500=0.62%
00:14:06.316 cpu : usr=0.45%, sys=0.24%, ctx=2181, majf=0, minf=3
00:14:06.316 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.316 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.316 issued rwts: total=640,652,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.316 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.316 job81: (groupid=0, jobs=1): err= 0: pid=71614: Mon Jul 22 17:18:25 2024
00:14:06.316 read: IOPS=74, BW=9479KiB/s (9707kB/s)(80.0MiB/8642msec)
00:14:06.316 slat (usec): min=6, max=953, avg=53.33, stdev=95.34
00:14:06.316 clat (msec): min=5, max=154, avg=17.31, stdev=17.11
00:14:06.316 lat (msec): min=6, max=154, avg=17.36, stdev=17.11
00:14:06.317 clat percentiles (msec):
00:14:06.317 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10],
00:14:06.317 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 15],
00:14:06.317 | 70.00th=[ 17], 80.00th=[ 21], 90.00th=[ 31], 95.00th=[ 38],
00:14:06.317 | 99.00th=[ 146], 99.50th=[ 148], 99.90th=[ 155], 99.95th=[ 155],
00:14:06.317 | 99.99th=[ 155]
00:14:06.317 write: IOPS=76, BW=9783KiB/s (10.0MB/s)(82.9MiB/8675msec); 0 zone resets
00:14:06.317 slat (usec): min=37, max=2183, avg=125.87, stdev=197.55
00:14:06.317 clat (msec): min=6, max=327, avg=103.86, stdev=44.12
00:14:06.317 lat (msec): min=6, max=328, avg=103.99, stdev=44.12
00:14:06.317 clat percentiles (msec):
00:14:06.317 | 1.00th=[ 12], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 71],
00:14:06.317 | 30.00th=[ 77], 40.00th=[ 82], 50.00th=[ 88], 60.00th=[ 103],
00:14:06.317 | 70.00th=[ 118], 80.00th=[ 136], 90.00th=[ 161], 95.00th=[ 192],
00:14:06.317 | 99.00th=[ 245], 99.50th=[ 300], 99.90th=[ 330], 99.95th=[ 330],
00:14:06.317 | 99.99th=[ 330]
00:14:06.317 bw ( KiB/s): min= 2048, max=15903, per=0.85%, avg=8383.55, stdev=4265.31, samples=20
00:14:06.317 iops : min= 16, max= 124, avg=65.40, stdev=33.36, samples=20
00:14:06.317 lat (msec) : 10=11.44%, 20=28.01%, 50=9.98%, 100=28.86%, 250=21.34%
00:14:06.317 lat (msec) : 500=0.38%
00:14:06.317 cpu : usr=0.55%, sys=0.22%, ctx=2030, majf=0, minf=7
00:14:06.317 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.317 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.317 issued rwts: total=640,663,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.317 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.317 job82: (groupid=0, jobs=1): err= 0: pid=71615: Mon Jul 22 17:18:25 2024
00:14:06.317 read: IOPS=76, BW=9813KiB/s (10.0MB/s)(80.0MiB/8348msec)
00:14:06.317 slat (usec): min=6, max=1241, avg=57.31, stdev=114.14
00:14:06.317 clat (usec): min=7199, max=66262, avg=19961.29, stdev=8177.63
00:14:06.317 lat (usec): min=7675, max=66349, avg=20018.61, stdev=8174.84
00:14:06.317 clat percentiles (usec):
00:14:06.317 | 1.00th=[ 9503], 5.00th=[10421], 10.00th=[12125], 20.00th=[13698],
00:14:06.317 | 30.00th=[15139], 40.00th=[16057], 50.00th=[17695], 60.00th=[19268],
00:14:06.317 | 70.00th=[22414], 80.00th=[25297], 90.00th=[30802], 95.00th=[35914],
00:14:06.317 | 99.00th=[45876], 99.50th=[46924], 99.90th=[66323], 99.95th=[66323],
00:14:06.317 | 99.99th=[66323]
00:14:06.317 write: IOPS=76, BW=9816KiB/s (10.1MB/s)(81.0MiB/8450msec); 0 zone resets
00:14:06.317 slat (usec): min=37, max=1708, avg=133.95, stdev=181.09
00:14:06.317 clat (msec): min=39, max=282, avg=103.30, stdev=36.39
00:14:06.317 lat (msec): min=39, max=282, avg=103.43, stdev=36.40
00:14:06.317 clat percentiles (msec):
00:14:06.317 | 1.00th=[ 47], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 74],
00:14:06.317 | 30.00th=[ 79], 40.00th=[ 86], 50.00th=[ 95], 60.00th=[ 104],
00:14:06.317 | 70.00th=[ 116], 80.00th=[ 129], 90.00th=[ 150], 95.00th=[ 176],
00:14:06.317 | 99.00th=[ 232], 99.50th=[ 253], 99.90th=[ 284], 99.95th=[ 284],
00:14:06.317 | 99.99th=[ 284]
00:14:06.317 bw ( KiB/s): min= 512, max=14336, per=0.84%, avg=8200.30, stdev=4197.09, samples=20
00:14:06.317 iops : min= 4, max= 112, avg=63.75, stdev=32.93, samples=20
00:14:06.317 lat (msec) : 10=1.63%, 20=29.11%, 50=19.41%, 100=28.34%, 250=21.20%
00:14:06.317 lat (msec) : 500=0.31%
00:14:06.317 cpu : usr=0.48%, sys=0.31%, ctx=2012, majf=0, minf=7
00:14:06.317 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.317 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.317 issued rwts: total=640,648,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.317 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.317 job83: (groupid=0, jobs=1): err= 0: pid=71617: Mon Jul 22 17:18:25 2024
00:14:06.317 read: IOPS=73, BW=9394KiB/s (9620kB/s)(75.8MiB/8257msec)
00:14:06.317 slat (usec): min=6, max=964, avg=63.70, stdev=134.03
00:14:06.317 clat (usec): min=6593, max=68139, avg=20504.43, stdev=10456.97
00:14:06.317 lat (usec): min=6663, max=68155, avg=20568.13, stdev=10449.39
00:14:06.317 clat percentiles (usec):
00:14:06.317 | 1.00th=[ 6915], 5.00th=[ 8160], 10.00th=[ 9896], 20.00th=[12780],
00:14:06.317 | 30.00th=[14615], 40.00th=[16581], 50.00th=[18220], 60.00th=[20055],
00:14:06.317 | 70.00th=[23725], 80.00th=[26346], 90.00th=[33817], 95.00th=[41157],
00:14:06.317 | 99.00th=[62653], 99.50th=[66847], 99.90th=[67634], 99.95th=[67634],
00:14:06.317 | 99.99th=[67634]
00:14:06.317 write: IOPS=75, BW=9707KiB/s (9940kB/s)(80.0MiB/8439msec); 0 zone resets
00:14:06.317 slat (usec): min=38, max=1892, avg=143.44, stdev=193.90
00:14:06.317 clat (msec): min=64, max=296, avg=104.58, stdev=39.14
00:14:06.317 lat (msec): min=64, max=296, avg=104.72, stdev=39.15
00:14:06.317 clat percentiles (msec):
00:14:06.317 | 1.00th=[ 67], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 72],
00:14:06.317 | 30.00th=[ 81], 40.00th=[ 88], 50.00th=[ 96], 60.00th=[ 103],
00:14:06.317 | 70.00th=[ 111], 80.00th=[ 127], 90.00th=[ 146], 95.00th=[ 182],
00:14:06.317 | 99.00th=[ 266], 99.50th=[ 271], 99.90th=[ 296], 99.95th=[ 296],
00:14:06.317 | 99.99th=[ 296]
00:14:06.317 bw ( KiB/s): min= 1021, max=13056, per=0.85%, avg=8392.16, stdev=3843.24, samples=19
00:14:06.317 iops : min= 7, max= 102, avg=65.37, stdev=30.27, samples=19
00:14:06.317 lat (msec) : 10=4.98%, 20=24.40%, 50=18.14%, 100=30.26%, 250=21.67%
00:14:06.317 lat (msec) : 500=0.56%
00:14:06.317 cpu : usr=0.43%, sys=0.26%, ctx=2077, majf=0, minf=3
00:14:06.317 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.317 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.317 issued rwts: total=606,640,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.317 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.317 job84: (groupid=0, jobs=1): err= 0: pid=71622: Mon Jul 22 17:18:25 2024
00:14:06.317 read: IOPS=72, BW=9299KiB/s (9522kB/s)(80.0MiB/8810msec)
00:14:06.317 slat (usec): min=6, max=1808, avg=45.97, stdev=107.61
00:14:06.317 clat (msec): min=4, max=127, avg=16.46, stdev=14.96
00:14:06.317 lat (msec): min=4, max=127, avg=16.51, stdev=14.96
00:14:06.317 clat percentiles (msec):
00:14:06.317 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9],
00:14:06.317 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14],
00:14:06.317 | 70.00th=[ 16], 80.00th=[ 19], 90.00th=[ 24], 95.00th=[ 39],
00:14:06.317 | 99.00th=[ 93], 99.50th=[ 100], 99.90th=[ 128], 99.95th=[ 128],
00:14:06.317 | 99.99th=[ 128]
00:14:06.317 write: IOPS=75, BW=9704KiB/s (9937kB/s)(83.0MiB/8758msec); 0 zone resets
00:14:06.317 slat (usec): min=36, max=7018, avg=142.69, stdev=323.14
00:14:06.317 clat (usec): min=1336, max=252344, avg=104517.65, stdev=47535.90
00:14:06.317 lat (usec): min=1879, max=252427, avg=104660.33, stdev=47542.76
00:14:06.317 clat percentiles (msec):
00:14:06.317 | 1.00th=[ 3], 5.00th=[ 13], 10.00th=[ 69], 20.00th=[ 72],
00:14:06.317 | 30.00th=[ 77], 40.00th=[ 84], 50.00th=[ 94], 60.00th=[ 111],
00:14:06.317 | 70.00th=[ 132], 80.00th=[ 146], 90.00th=[ 167], 95.00th=[ 188],
00:14:06.317 | 99.00th=[ 226], 99.50th=[ 243], 99.90th=[ 253], 99.95th=[ 253],
00:14:06.317 | 99.99th=[ 253]
00:14:06.317 bw ( KiB/s): min= 1024, max=24064, per=0.86%, avg=8406.90, stdev=5100.68, samples=20
00:14:06.317 iops : min= 8, max= 188, avg=65.50, stdev=39.87, samples=20
00:14:06.317 lat (msec) : 2=0.15%, 4=0.77%, 10=13.73%, 20=30.21%, 50=6.13%
00:14:06.317 lat (msec) : 100=25.15%, 250=23.77%, 500=0.08%
00:14:06.317 cpu : usr=0.53%, sys=0.26%, ctx=2048, majf=0, minf=5
00:14:06.317 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.317 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.317 issued rwts: total=640,664,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.317 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.317 job85: (groupid=0, jobs=1): err= 0: pid=71624: Mon Jul 22 17:18:25 2024
00:14:06.317 read: IOPS=57, BW=7416KiB/s (7594kB/s)(60.0MiB/8285msec)
00:14:06.317 slat (usec): min=6, max=1881, avg=64.60, stdev=139.11
00:14:06.317 clat (msec): min=6, max=155, avg=22.05, stdev=19.39
00:14:06.317 lat (msec): min=7, max=156, avg=22.11, stdev=19.38
00:14:06.317 clat percentiles (msec):
00:14:06.317 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 13],
00:14:06.317 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 18], 60.00th=[ 20],
00:14:06.317 | 70.00th=[ 23], 80.00th=[ 26], 90.00th=[ 31], 95.00th=[ 44],
00:14:06.317 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 157], 99.95th=[ 157],
00:14:06.317 | 99.99th=[ 157]
00:14:06.317 write: IOPS=73, BW=9399KiB/s (9624kB/s)(80.0MiB/8716msec); 0 zone resets
00:14:06.317 slat (usec): min=30, max=2359, avg=140.34, stdev=215.82
00:14:06.317 clat (msec): min=47, max=494, avg=108.05, stdev=57.59
00:14:06.317 lat (msec): min=48, max=494, avg=108.19, stdev=57.58
00:14:06.317 clat percentiles (msec):
00:14:06.317 | 1.00th=[ 56], 5.00th=[ 69], 10.00th=[ 70], 20.00th=[ 71],
00:14:06.317 | 30.00th=[ 74], 40.00th=[ 82], 50.00th=[ 90], 60.00th=[ 103],
00:14:06.317 | 70.00th=[ 115], 80.00th=[ 132], 90.00th=[ 157], 95.00th=[ 199],
00:14:06.317 | 99.00th=[ 393], 99.50th=[ 430], 99.90th=[ 493], 99.95th=[ 493],
00:14:06.317 | 99.99th=[ 493]
00:14:06.317 bw ( KiB/s): min= 768, max=12825, per=0.87%, avg=8518.84, stdev=3968.70, samples=19
00:14:06.317 iops : min= 6, max= 100, avg=66.32, stdev=31.01, samples=19
00:14:06.317 lat (msec) : 10=4.82%, 20=22.23%, 50=14.46%, 100=34.11%, 250=22.50%
00:14:06.317 lat (msec) : 500=1.88%
00:14:06.317 cpu : usr=0.40%, sys=0.23%, ctx=1895, majf=0, minf=7
00:14:06.317 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.317 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.317 issued rwts: total=480,640,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.317 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.317 job86: (groupid=0, jobs=1): err= 0: pid=71625: Mon Jul 22 17:18:25 2024
00:14:06.317 read: IOPS=65, BW=8321KiB/s (8520kB/s)(68.1MiB/8384msec)
00:14:06.317 slat (usec): min=7, max=1165, avg=62.75, stdev=121.79
00:14:06.317 clat (usec): min=6487, max=67041, avg=19822.25, stdev=10231.66
00:14:06.317 lat (usec): min=6974, max=67054, avg=19885.01, stdev=10237.58
00:14:06.317 clat percentiles (usec):
00:14:06.317 | 1.00th=[ 7898], 5.00th=[ 8717], 10.00th=[ 9896], 20.00th=[11994],
00:14:06.318 | 30.00th=[13566], 40.00th=[15533], 50.00th=[17433], 60.00th=[19530],
00:14:06.318 | 70.00th=[22152], 80.00th=[25560], 90.00th=[32113], 95.00th=[40109],
00:14:06.318 | 99.00th=[63177], 99.50th=[66847], 99.90th=[66847], 99.95th=[66847],
00:14:06.318 | 99.99th=[66847]
00:14:06.318 write: IOPS=74, BW=9481KiB/s (9709kB/s)(80.0MiB/8640msec); 0 zone resets
00:14:06.318 slat (usec): min=30, max=4065, avg=128.84, stdev=247.40
00:14:06.318 clat (msec): min=25, max=373, avg=107.15, stdev=45.54
00:14:06.318 lat (msec): min=25, max=373, avg=107.28, stdev=45.54
00:14:06.318 clat percentiles (msec):
00:14:06.318 | 1.00th=[ 32], 5.00th=[ 68], 10.00th=[ 70], 20.00th=[ 73],
00:14:06.318 | 30.00th=[ 81], 40.00th=[ 89], 50.00th=[ 94], 60.00th=[ 106],
00:14:06.318 | 70.00th=[ 116], 80.00th=[ 132], 90.00th=[ 153], 95.00th=[ 186],
00:14:06.318 | 99.00th=[ 300], 99.50th=[ 334], 99.90th=[ 376], 99.95th=[ 376],
00:14:06.318 | 99.99th=[ 376]
00:14:06.318 bw ( KiB/s): min= 2299, max=14080, per=0.89%, avg=8728.89, stdev=3771.44, samples=18
00:14:06.318 iops : min= 17, max= 110, avg=67.89, stdev=29.62, samples=18
00:14:06.318 lat (msec) : 10=4.81%, 20=23.88%, 50=16.96%, 100=30.63%, 250=22.45%
00:14:06.318 lat (msec) : 500=1.27%
00:14:06.318 cpu : usr=0.42%, sys=0.25%, ctx=1919, majf=0, minf=5
00:14:06.318 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.318 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.318 issued rwts: total=545,640,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.318 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.318 job87: (groupid=0, jobs=1): err= 0: pid=71626: Mon Jul 22 17:18:25 2024
00:14:06.318 read: IOPS=62, BW=7962KiB/s (8153kB/s)(60.0MiB/7717msec)
00:14:06.318 slat (usec): min=7, max=1781, avg=58.28, stdev=124.66
00:14:06.318 clat (msec): min=7, max=643, avg=31.97, stdev=67.23
00:14:06.318 lat (msec): min=7, max=643, avg=32.03, stdev=67.23
00:14:06.318 clat percentiles (msec):
00:14:06.318 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 11],
00:14:06.318 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 16], 60.00th=[ 18],
00:14:06.318 | 70.00th=[ 20], 80.00th=[ 26], 90.00th=[ 44], 95.00th=[ 105],
00:14:06.318 | 99.00th=[ 456], 99.50th=[ 472], 99.90th=[ 642], 99.95th=[ 642],
00:14:06.318 | 99.99th=[ 642]
00:14:06.318 write: IOPS=65, BW=8443KiB/s (8646kB/s)(67.0MiB/8126msec); 0 zone resets
00:14:06.318 slat (usec): min=36, max=4077, avg=148.62, stdev=274.28
00:14:06.318 clat (msec): min=60, max=390, avg=120.19, stdev=50.94
00:14:06.318 lat (msec): min=60, max=390, avg=120.34, stdev=50.93
00:14:06.318 clat percentiles (msec):
00:14:06.318 | 1.00th=[ 67], 5.00th=[ 69], 10.00th=[ 72], 20.00th=[ 79],
00:14:06.318 | 30.00th=[ 87], 40.00th=[ 97], 50.00th=[ 109], 60.00th=[ 124],
00:14:06.318 | 70.00th=[ 136], 80.00th=[ 148], 90.00th=[ 174], 95.00th=[ 215],
00:14:06.318 | 99.00th=[ 330], 99.50th=[ 380], 99.90th=[ 393], 99.95th=[ 393],
00:14:06.318 | 99.99th=[ 393]
00:14:06.318 bw ( KiB/s): min= 1536, max=12544, per=0.77%, avg=7516.94, stdev=3321.64, samples=18
00:14:06.318 iops : min= 12, max= 98, avg=58.50, stdev=26.02, samples=18
00:14:06.318 lat (msec) : 10=7.68%, 20=25.69%, 50=9.94%, 100=23.72%, 250=30.71%
00:14:06.318 lat (msec) : 500=2.07%, 750=0.20%
00:14:06.318 cpu : usr=0.39%, sys=0.19%, ctx=1775, majf=0, minf=5
00:14:06.318 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.318 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.318 issued rwts: total=480,536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.318 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.318 job88: (groupid=0, jobs=1): err= 0: pid=71627: Mon Jul 22 17:18:25 2024
00:14:06.318 read: IOPS=66, BW=8567KiB/s (8772kB/s)(60.0MiB/7172msec)
00:14:06.318 slat (usec): min=7, max=1826, avg=80.31, stdev=150.00
00:14:06.318 clat (msec): min=4, max=220, avg=22.76, stdev=31.18
00:14:06.318 lat (msec): min=4, max=220, avg=22.84, stdev=31.17
00:14:06.318 clat percentiles (msec):
00:14:06.318 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 10],
00:14:06.318 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 17],
00:14:06.318 | 70.00th=[ 19], 80.00th=[ 23], 90.00th=[ 28], 95.00th=[ 108],
00:14:06.318 | 99.00th=[ 169], 99.50th=[ 220], 99.90th=[ 220], 99.95th=[ 220],
00:14:06.318 | 99.99th=[ 220]
00:14:06.318 write: IOPS=61, BW=7817KiB/s (8005kB/s)(66.1MiB/8662msec); 0 zone resets
00:14:06.318 slat (usec): min=37, max=3942, avg=146.67, stdev=235.29
00:14:06.318 clat (msec): min=66, max=481, avg=130.29, stdev=48.42
00:14:06.318 lat (msec): min=66, max=481, avg=130.44, stdev=48.42
00:14:06.318 clat percentiles (msec):
00:14:06.318 | 1.00th=[ 73], 5.00th=[ 75], 10.00th=[ 83], 20.00th=[ 94],
00:14:06.318 | 30.00th=[ 105], 40.00th=[ 116], 50.00th=[ 127], 60.00th=[ 136],
00:14:06.318 | 70.00th=[ 142], 80.00th=[ 155], 90.00th=[ 178], 95.00th=[ 207],
00:14:06.318 | 99.00th=[ 347], 99.50th=[ 414], 99.90th=[ 481], 99.95th=[ 481],
00:14:06.318 | 99.99th=[ 481]
00:14:06.318 bw ( KiB/s): min= 2308, max=11776, per=0.72%, avg=7028.63, stdev=2544.77, samples=19
00:14:06.318 iops : min= 18, max= 92, avg=54.74, stdev=19.91, samples=19
00:14:06.318 lat (msec) : 10=10.01%, 20=25.67%, 50=9.22%, 100=13.68%, 250=40.24%
00:14:06.318 lat (msec) : 500=1.19%
00:14:06.318 cpu : usr=0.43%, sys=0.15%, ctx=1772, majf=0, minf=1
00:14:06.318 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.318 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.318 issued rwts: total=480,529,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.318 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.318 job89: (groupid=0, jobs=1): err= 0: pid=71628: Mon Jul 22 17:18:25 2024
00:14:06.318 read: IOPS=77, BW=9891KiB/s (10.1MB/s)(80.0MiB/8282msec)
00:14:06.318 slat (usec): min=6, max=1747, avg=66.26, stdev=135.80
00:14:06.318 clat (usec): min=7644, max=46898, avg=19786.53, stdev=7616.35
00:14:06.318 lat (usec): min=7659, max=46914, avg=19852.79, stdev=7613.51
00:14:06.318 clat percentiles (usec):
00:14:06.318 | 1.00th=[ 9110], 5.00th=[10159], 10.00th=[11600], 20.00th=[12780],
00:14:06.318 | 30.00th=[14484], 40.00th=[16581], 50.00th=[18482], 60.00th=[21365],
00:14:06.318 | 70.00th=[23200], 80.00th=[25035], 90.00th=[28705], 95.00th=[33424],
00:14:06.318 | 99.00th=[45351], 99.50th=[45876], 99.90th=[46924], 99.95th=[46924],
00:14:06.318 | 99.99th=[46924]
00:14:06.318 write: IOPS=77, BW=9921KiB/s (10.2MB/s)(82.0MiB/8464msec); 0 zone resets
00:14:06.318 slat (usec): min=37, max=3541, avg=150.65, stdev=245.03
00:14:06.318 clat (msec): min=22, max=247, avg=102.34, stdev=35.29
00:14:06.318 lat (msec): min=22, max=247, avg=102.49, stdev=35.33
00:14:06.318 clat percentiles (msec):
00:14:06.318 | 1.00th=[ 39], 5.00th=[ 68], 10.00th=[ 70], 20.00th=[ 73],
00:14:06.318 | 30.00th=[ 79], 40.00th=[ 84], 50.00th=[ 92], 60.00th=[ 103],
00:14:06.318 | 70.00th=[ 115], 80.00th=[ 132], 90.00th=[ 150], 95.00th=[ 169],
00:14:06.318 | 99.00th=[ 224], 99.50th=[ 226], 99.90th=[ 247], 99.95th=[ 247],
00:14:06.318 | 99.99th=[ 247]
00:14:06.318 bw ( KiB/s): min= 255, max=13595, per=0.84%, avg=8285.05, stdev=3996.60, samples=20
00:14:06.318 iops : min= 1, max= 106, avg=64.35, stdev=31.33, samples=20
00:14:06.318 lat (msec) : 10=1.47%, 20=26.08%, 50=22.45%, 100=28.70%, 250=21.30%
00:14:06.318 cpu : usr=0.51%, sys=0.22%, ctx=2178, majf=0, minf=7
00:14:06.318 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.318 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.318 issued rwts: total=640,656,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.318 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.318 job90: (groupid=0, jobs=1): err= 0: pid=71629: Mon Jul 22 17:18:25 2024
00:14:06.318 read: IOPS=62, BW=7979KiB/s (8171kB/s)(60.0MiB/7700msec)
00:14:06.318 slat (usec): min=6, max=1818, avg=67.34, stdev=149.34
00:14:06.318 clat (usec): min=8628, max=56306, avg=20293.13, stdev=8738.10
00:14:06.318 lat (usec): min=8694, max=56319, avg=20360.47, stdev=8734.29
00:14:06.318 clat percentiles (usec):
00:14:06.318 | 1.00th=[ 9110], 5.00th=[10552], 10.00th=[12518], 20.00th=[14222],
00:14:06.318 | 30.00th=[15008], 40.00th=[16319], 50.00th=[17957], 60.00th=[18744],
00:14:06.318 | 70.00th=[21627], 80.00th=[25560], 90.00th=[32375], 95.00th=[40109],
00:14:06.318 | 99.00th=[51643], 99.50th=[52691], 99.90th=[56361], 99.95th=[56361],
00:14:06.318 | 99.99th=[56361]
00:14:06.318 write: IOPS=71, BW=9175KiB/s (9395kB/s)(79.0MiB/8817msec); 0 zone resets
00:14:06.318 slat (usec): min=38, max=4441, avg=126.71, stdev=218.89
00:14:06.318 clat (msec): min=39, max=444, avg=110.76, stdev=52.14
00:14:06.318 lat (msec): min=39, max=444, avg=110.89, stdev=52.15
00:14:06.318 clat percentiles (msec):
00:14:06.318 | 1.00th=[ 46], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 75],
00:14:06.318 | 30.00th=[ 81], 40.00th=[ 85], 50.00th=[ 91], 60.00th=[ 101],
00:14:06.318 | 70.00th=[ 123], 80.00th=[ 144], 90.00th=[ 169], 95.00th=[ 190],
00:14:06.318 | 99.00th=[ 351], 99.50th=[ 372], 99.90th=[ 443], 99.95th=[ 443],
00:14:06.318 | 99.99th=[ 443]
00:14:06.318 bw ( KiB/s): min= 1021, max=13540, per=0.81%, avg=7988.90, stdev=3901.98, samples=20
00:14:06.318 iops : min= 7, max= 105, avg=62.10, stdev=30.54, samples=20
00:14:06.318 lat (msec) : 10=1.35%, 20=26.53%, 50=15.47%, 100=33.90%, 250=21.49%
00:14:06.318 lat (msec) : 500=1.26%
00:14:06.318 cpu : usr=0.48%, sys=0.17%, ctx=1811, majf=0, minf=3
00:14:06.318 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.318 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.318 issued rwts: total=480,632,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.318 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.318 job91: (groupid=0, jobs=1): err= 0: pid=71630: Mon Jul 22 17:18:25 2024
00:14:06.318 read: IOPS=60, BW=7716KiB/s (7901kB/s)(60.0MiB/7963msec)
00:14:06.318 slat (usec): min=5, max=4266, avg=72.03, stdev=268.87
00:14:06.318 clat (msec): min=6, max=182, avg=24.90, stdev=21.75
00:14:06.318 lat (msec): min=7, max=182, avg=24.98, stdev=21.74
00:14:06.318 clat percentiles (msec):
00:14:06.318 | 1.00th=[ 9], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 15],
00:14:06.318 | 30.00th=[ 17], 40.00th=[ 19], 50.00th=[ 21], 60.00th=[ 23],
00:14:06.318 | 70.00th=[ 26], 80.00th=[ 29], 90.00th=[ 36], 95.00th=[ 47],
00:14:06.319 | 99.00th=[ 171], 99.50th=[ 180], 99.90th=[ 184], 99.95th=[ 184],
00:14:06.319 | 99.99th=[ 184]
00:14:06.319 write: IOPS=73, BW=9454KiB/s (9681kB/s)(78.6MiB/8516msec); 0 zone resets
00:14:06.319 slat (usec): min=30, max=1523, avg=136.75, stdev=172.76
00:14:06.319 clat (msec): min=47, max=334, avg=107.36, stdev=43.12
00:14:06.319 lat (msec): min=47, max=334, avg=107.49, stdev=43.12
00:14:06.319 clat percentiles (msec):
00:14:06.319 | 1.00th=[ 70], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 78],
00:14:06.319 | 30.00th=[ 83], 40.00th=[ 87], 50.00th=[ 92], 60.00th=[ 97],
00:14:06.319 | 70.00th=[ 108], 80.00th=[ 134], 90.00th=[ 171], 95.00th=[ 197],
00:14:06.319 | 99.00th=[ 275], 99.50th=[ 300], 99.90th=[ 334], 99.95th=[ 334],
00:14:06.319 | 99.99th=[ 334]
00:14:06.319 bw ( KiB/s): min= 1792, max=13312, per=0.81%, avg=7952.30, stdev=4018.04, samples=20
00:14:06.319 iops : min= 14, max= 104, avg=61.95, stdev=31.31, samples=20
00:14:06.319 lat (msec) : 10=1.44%, 20=18.94%, 50=21.64%, 100=37.33%, 250=19.84%
00:14:06.319 lat (msec) : 500=0.81%
00:14:06.319 cpu : usr=0.48%, sys=0.20%, ctx=1777, majf=0, minf=3
00:14:06.319 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.319 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.319 issued rwts: total=480,629,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.319 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.319 job92: (groupid=0, jobs=1): err= 0: pid=71631: Mon Jul 22 17:18:25 2024
00:14:06.319 read: IOPS=57, BW=7391KiB/s (7568kB/s)(60.0MiB/8313msec)
00:14:06.319 slat (usec): min=7, max=1765, avg=77.55, stdev=161.19
00:14:06.319 clat (usec): min=10415, max=65397, avg=23659.99, stdev=8588.16
00:14:06.319 lat (usec): min=10530, max=65457, avg=23737.54, stdev=8582.78
00:14:06.319 clat percentiles (usec):
00:14:06.319 | 1.00th=[10683], 5.00th=[12518], 10.00th=[13829], 20.00th=[16909],
00:14:06.319 | 30.00th=[18744], 40.00th=[20841], 50.00th=[22938], 60.00th=[24511],
00:14:06.319 | 70.00th=[25822], 80.00th=[28443], 90.00th=[34341], 95.00th=[38011],
00:14:06.319 | 99.00th=[54789], 99.50th=[57410], 99.90th=[65274], 99.95th=[65274],
00:14:06.319 | 99.99th=[65274]
00:14:06.319 write: IOPS=67, BW=8610KiB/s (8816kB/s)(72.5MiB/8623msec); 0 zone resets
00:14:06.319 slat (usec): min=37, max=1986, avg=127.84, stdev=167.47
00:14:06.319 clat (msec): min=14, max=678, avg=117.91, stdev=74.02
00:14:06.319 lat (msec): min=14, max=678, avg=118.03, stdev=74.03
00:14:06.319 clat percentiles (msec):
00:14:06.319 | 1.00th=[ 26], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 79],
00:14:06.319 | 30.00th=[ 83], 40.00th=[ 87], 50.00th=[ 91], 60.00th=[ 101],
00:14:06.319 | 70.00th=[ 116], 80.00th=[ 146], 90.00th=[ 186], 95.00th=[ 247],
00:14:06.319 | 99.00th=[ 489], 99.50th=[ 634], 99.90th=[ 676], 99.95th=[ 676],
00:14:06.319 | 99.99th=[ 676]
00:14:06.319 bw ( KiB/s): min= 256, max=13312, per=0.83%, avg=8133.00, stdev=3726.10, samples=18
00:14:06.319 iops : min= 2, max= 104, avg=63.44, stdev=29.08, samples=18
00:14:06.319 lat (msec) : 20=16.32%, 50=29.06%, 100=32.74%, 250=19.15%, 500=2.26%
00:14:06.319 lat (msec) : 750=0.47%
00:14:06.319 cpu : usr=0.42%, sys=0.18%, ctx=1825, majf=0, minf=7
00:14:06.319 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.319 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.319 issued rwts: total=480,580,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.319 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.319 job93: (groupid=0, jobs=1): err= 0: pid=71632: Mon Jul 22 17:18:25 2024
00:14:06.319 read: IOPS=58, BW=7481KiB/s (7660kB/s)(60.0MiB/8213msec)
00:14:06.319 slat (usec): min=5, max=793, avg=53.55, stdev=89.21
00:14:06.319 clat (msec): min=12, max=106, avg=24.61, stdev=13.39
00:14:06.319 lat (msec): min=12, max=106, avg=24.66, stdev=13.38
00:14:06.319 clat percentiles (msec):
00:14:06.319 | 1.00th=[ 13], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 17],
00:14:06.319 | 30.00th=[ 18], 40.00th=[ 20], 50.00th=[ 22], 60.00th=[ 24],
00:14:06.319 | 70.00th=[ 26], 80.00th=[ 29], 90.00th=[ 36], 95.00th=[ 50],
00:14:06.319 | 99.00th=[ 90], 99.50th=[ 94], 99.90th=[ 107], 99.95th=[ 107],
00:14:06.319 | 99.99th=[ 107]
00:14:06.319 write: IOPS=74, BW=9597KiB/s (9827kB/s)(80.0MiB/8536msec); 0 zone resets
00:14:06.319 slat (usec): min=37, max=3971, avg=141.77, stdev=224.87
00:14:06.319 clat (msec): min=61, max=344, avg=105.63, stdev=45.80
00:14:06.319 lat (msec): min=61, max=344, avg=105.78, stdev=45.81
00:14:06.319 clat percentiles (msec):
00:14:06.319 | 1.00th=[ 68], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 74],
00:14:06.319 | 30.00th=[ 77], 40.00th=[ 82], 50.00th=[ 88], 60.00th=[ 97],
00:14:06.319 | 70.00th=[ 111], 80.00th=[ 134], 90.00th=[ 165], 95.00th=[ 186],
00:14:06.319 | 99.00th=[ 296], 99.50th=[ 313], 99.90th=[ 347], 99.95th=[ 347],
00:14:06.319 | 99.99th=[ 347]
00:14:06.319 bw ( KiB/s): min= 768, max=13056, per=0.87%, avg=8522.53, stdev=3813.73, samples=19
00:14:06.319 iops : min= 6, max= 102, avg=66.42, stdev=29.93, samples=19
00:14:06.319 lat (msec) : 20=17.41%, 50=23.39%, 100=38.21%, 250=19.46%, 500=1.52%
00:14:06.319 cpu : usr=0.46%, sys=0.23%, ctx=1857, majf=0, minf=7
00:14:06.319 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.319 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.319 issued rwts: total=480,640,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.319 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.319 job94: (groupid=0, jobs=1): err= 0: pid=71633: Mon Jul 22 17:18:25 2024
00:14:06.319 read: IOPS=66, BW=8482KiB/s (8685kB/s)(60.0MiB/7244msec)
00:14:06.319 slat (usec): min=7, max=584, avg=53.44, stdev=81.97
00:14:06.319 clat (msec): min=3, max=237, avg=27.38, stdev=40.76
00:14:06.319 lat (msec): min=4, max=237, avg=27.43, stdev=40.76
00:14:06.319 clat percentiles (msec):
00:14:06.319 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 11],
00:14:06.319 | 30.00th=[ 12], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 19],
00:14:06.319 | 70.00th=[ 21], 80.00th=[ 25], 90.00th=[ 33], 95.00th=[ 125],
00:14:06.319 | 99.00th=[ 224], 99.50th=[ 232], 99.90th=[ 239], 99.95th=[ 239],
00:14:06.319 | 99.99th=[ 239]
00:14:06.319 write: IOPS=60, BW=7680KiB/s (7864kB/s)(63.0MiB/8400msec); 0 zone resets
00:14:06.319 slat (usec): min=38, max=2605, avg=153.76, stdev=232.52
00:14:06.319 clat (msec): min=54, max=340, avg=132.58, stdev=48.28
00:14:06.319 lat (msec): min=54, max=340, avg=132.74, stdev=48.30
00:14:06.319 clat percentiles (msec):
00:14:06.319 | 1.00th=[ 72], 5.00th=[ 75], 10.00th=[ 79], 20.00th=[ 84],
00:14:06.319 | 30.00th=[ 94], 40.00th=[ 109], 50.00th=[ 128], 60.00th=[ 144],
00:14:06.319 | 70.00th=[ 161], 80.00th=[ 176], 90.00th=[ 192], 95.00th=[ 215],
00:14:06.319 | 99.00th=[ 268], 99.50th=[ 321], 99.90th=[ 342], 99.95th=[ 342],
00:14:06.319 | 99.99th=[ 342]
00:14:06.319 bw ( KiB/s): min= 1280, max=11264, per=0.68%, avg=6692.47, stdev=2682.79, samples=19
00:14:06.319 iops : min= 10, max= 88, avg=52.05, stdev=21.14, samples=19
00:14:06.319 lat (msec) : 4=0.10%, 10=8.84%, 20=24.70%, 50=11.08%, 100=18.09%
00:14:06.319 lat (msec) : 250=36.48%, 500=0.71%
00:14:06.319 cpu : usr=0.42%, sys=0.13%, ctx=1711, majf=0, minf=5
00:14:06.319 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.319 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.319 issued rwts: total=480,504,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.319 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.319 job95: (groupid=0, jobs=1): err= 0: pid=71634: Mon Jul 22 17:18:25 2024
00:14:06.319 read: IOPS=57, BW=7413KiB/s (7591kB/s)(60.0MiB/8288msec)
00:14:06.319 slat (usec): min=6, max=1642, avg=65.50, stdev=130.36
00:14:06.319 clat (msec): min=11, max=119, avg=24.75, stdev=13.82
00:14:06.319 lat (msec): min=11, max=119, avg=24.81, stdev=13.84
00:14:06.319 clat percentiles (msec):
00:14:06.319 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 16],
00:14:06.319 | 30.00th=[ 18], 40.00th=[ 21], 50.00th=[ 24], 60.00th=[ 25],
00:14:06.319 | 70.00th=[ 26], 80.00th=[ 30], 90.00th=[ 37], 95.00th=[ 43],
00:14:06.319 | 99.00th=[ 110], 99.50th=[ 112], 99.90th=[ 120], 99.95th=[ 120],
00:14:06.319 | 99.99th=[ 120]
00:14:06.319 write: IOPS=73, BW=9436KiB/s (9662kB/s)(78.9MiB/8560msec); 0 zone resets
00:14:06.319 slat (usec): min=36, max=2655, avg=150.97, stdev=231.80
00:14:06.319 clat (msec): min=31, max=346, avg=107.54, stdev=49.02
00:14:06.319 lat (msec): min=31, max=346, avg=107.69, stdev=49.01
00:14:06.319 clat percentiles (msec):
00:14:06.319 | 1.00th=[ 37], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 77],
00:14:06.319 | 30.00th=[ 81], 40.00th=[ 87], 50.00th=[ 92], 60.00th=[ 100],
00:14:06.320 | 70.00th=[ 107], 80.00th=[ 127], 90.00th=[ 163], 95.00th=[ 220],
00:14:06.320 | 99.00th=[ 321], 99.50th=[ 330], 99.90th=[ 347], 99.95th=[ 347],
00:14:06.320 | 99.99th=[ 347]
00:14:06.320 bw ( KiB/s): min= 512, max=13056, per=0.85%, avg=8389.47, stdev=3878.21, samples=19
00:14:06.320 iops : min= 4, max= 102, avg=65.47, stdev=30.37, samples=19
00:14:06.320 lat (msec) : 20=16.74%, 50=26.37%, 100=34.83%, 250=20.34%, 500=1.71%
00:14:06.320 cpu : usr=0.41%, sys=0.25%, ctx=1871, majf=0, minf=5
00:14:06.320 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.320 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.320 issued rwts: total=480,631,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.320 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.320 job96: (groupid=0, jobs=1): err= 0: pid=71635: Mon Jul 22 17:18:25 2024
00:14:06.320 read: IOPS=67, BW=8615KiB/s (8821kB/s)(60.0MiB/7132msec)
00:14:06.320 slat (usec): min=7, max=2278, avg=84.10, stdev=239.79
00:14:06.320 clat (msec): min=5, max=406, avg=33.35, stdev=56.25
00:14:06.320 lat (msec): min=6, max=406, avg=33.43, stdev=56.25
00:14:06.320 clat percentiles (msec):
00:14:06.320 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12],
00:14:06.320 | 30.00th=[ 13], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 19],
00:14:06.320 | 70.00th=[ 22], 80.00th=[ 27], 90.00th=[ 70], 95.00th=[ 134],
00:14:06.320 | 99.00th=[ 388], 99.50th=[ 397], 99.90th=[ 405], 99.95th=[ 405],
00:14:06.320 | 99.99th=[ 405]
00:14:06.320 write: IOPS=61, BW=7825KiB/s (8012kB/s)(61.4MiB/8032msec); 0 zone resets
00:14:06.320 slat (usec): min=36, max=1554, avg=151.57, stdev=195.24
00:14:06.320 clat (msec): min=66, max=333, avg=130.07, stdev=47.75
00:14:06.320 lat (msec): min=66, max=333, avg=130.22, stdev=47.78
00:14:06.320 clat percentiles (msec):
00:14:06.320 | 1.00th=[ 71], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 84],
00:14:06.320 | 30.00th=[ 93], 40.00th=[ 108], 50.00th=[ 126], 60.00th=[ 140],
00:14:06.320 | 70.00th=[ 153], 80.00th=[ 174], 90.00th=[ 197], 95.00th=[ 218],
00:14:06.320 | 99.00th=[ 262], 99.50th=[ 313], 99.90th=[ 334], 99.95th=[ 334],
00:14:06.320 | 99.99th=[ 334]
00:14:06.320 bw ( KiB/s): min= 2793, max=12774, per=0.70%, avg=6879.28, stdev=2831.56, samples=18
00:14:06.320 iops : min= 21, max= 99, avg=53.50, stdev=22.24, samples=18
00:14:06.320 lat (msec) : 10=3.60%, 20=28.63%, 50=10.81%, 100=19.57%, 250=36.05%
00:14:06.320 lat (msec) : 500=1.34%
00:14:06.320 cpu : usr=0.46%, sys=0.14%, ctx=1644, majf=0, minf=9
00:14:06.320 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=95.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.320 complete : 0=0.0%, 4=99.2%, 8=0.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.320 issued rwts: total=480,491,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.320 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.320 job97: (groupid=0, jobs=1): err= 0: pid=71636: Mon Jul 22 17:18:25 2024
00:14:06.320 read: IOPS=65, BW=8386KiB/s (8587kB/s)(68.8MiB/8395msec)
00:14:06.320 slat (usec): min=6, max=1898, avg=74.18, stdev=151.56
00:14:06.320 clat (usec): min=8226, max=51606, avg=18215.77, stdev=7021.45
00:14:06.320 lat (usec): min=8247, max=51631, avg=18289.95, stdev=7029.36
00:14:06.320 clat percentiles (usec):
00:14:06.320 | 1.00th=[ 9896], 5.00th=[10683], 10.00th=[11338], 20.00th=[12518],
00:14:06.320 | 30.00th=[13698], 40.00th=[15270], 50.00th=[16319], 60.00th=[17957],
00:14:06.320 | 70.00th=[19530], 80.00th=[22676], 90.00th=[28705], 95.00th=[31851],
00:14:06.320 | 99.00th=[39060], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643],
00:14:06.320 | 99.99th=[51643]
00:14:06.320 write: IOPS=73, BW=9374KiB/s (9599kB/s)(80.0MiB/8739msec); 0 zone resets
00:14:06.320 slat (usec): min=39, max=2611, avg=141.95, stdev=205.81
00:14:06.320 clat (msec): min=11, max=428, avg=108.43, stdev=49.08
00:14:06.320 lat (msec): min=11, max=428, avg=108.57, stdev=49.09
00:14:06.320 clat percentiles (msec):
00:14:06.320 | 1.00th=[ 29], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 77],
00:14:06.320 | 30.00th=[ 80], 40.00th=[ 84], 50.00th=[ 90], 60.00th=[ 97],
00:14:06.320 | 70.00th=[ 112], 80.00th=[ 144], 90.00th=[ 180], 95.00th=[ 201],
00:14:06.320 | 99.00th=[ 275], 99.50th=[ 309], 99.90th=[ 430], 99.95th=[ 430],
00:14:06.320 | 99.99th=[ 430]
00:14:06.320 bw ( KiB/s): min= 1024, max=14080, per=0.85%, avg=8366.00, stdev=3959.66, samples=19
00:14:06.320 iops : min= 8, max= 110, avg=65.32, stdev=30.90, samples=19
00:14:06.320 lat (msec) : 10=0.50%, 20=33.61%, 50=12.69%, 100=33.28%, 250=18.82%
00:14:06.320 lat (msec) : 500=1.09%
00:14:06.320 cpu : usr=0.46%, sys=0.24%, ctx=2024, majf=0, minf=5
00:14:06.320 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.320 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.320 issued rwts: total=550,640,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.320 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.320 job98: (groupid=0, jobs=1): err= 0: pid=71637: Mon Jul 22 17:18:25 2024
00:14:06.320 read: IOPS=70, BW=9025KiB/s (9242kB/s)(74.0MiB/8396msec)
00:14:06.320 slat (usec): min=6, max=1696, avg=59.03, stdev=125.21
00:14:06.320 clat (msec): min=8, max=207, avg=20.61, stdev=21.10
00:14:06.320 lat (msec): min=8, max=207, avg=20.67, stdev=21.10
00:14:06.320 clat percentiles (msec):
00:14:06.320 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13],
00:14:06.320 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 18],
00:14:06.320 | 70.00th=[ 21], 80.00th=[ 24], 90.00th=[ 31], 95.00th=[ 36],
00:14:06.320 | 99.00th=[ 184], 99.50th=[ 197], 99.90th=[ 207], 99.95th=[ 207],
00:14:06.320 | 99.99th=[ 207]
00:14:06.320 write: IOPS=75, BW=9676KiB/s (9909kB/s)(80.0MiB/8466msec); 0 zone resets
00:14:06.320 slat (usec): min=38, max=1768, avg=146.42, stdev=210.31
00:14:06.320 clat (msec): min=50, max=427, avg=104.82, stdev=47.83
00:14:06.320 lat (msec): min=50, max=427, avg=104.97, stdev=47.85
00:14:06.320 clat percentiles (msec):
00:14:06.320 | 1.00th=[ 58], 5.00th=[ 72], 10.00th=[ 72], 20.00th=[ 73],
00:14:06.320 | 30.00th=[ 77], 40.00th=[ 81], 50.00th=[ 88], 60.00th=[ 96],
00:14:06.320 | 70.00th=[ 106], 80.00th=[ 126], 90.00th=[ 163], 95.00th=[ 203],
00:14:06.320 | 99.00th=[ 296], 99.50th=[ 338], 99.90th=[ 426], 99.95th=[ 426],
00:14:06.320 | 99.99th=[ 426]
00:14:06.320 bw ( KiB/s): min= 1792, max=13568, per=0.90%, avg=8872.50, stdev=3845.03, samples=18
00:14:06.320 iops : min= 14, max= 106, avg=69.17, stdev=30.15, samples=18
00:14:06.320 lat (msec) : 10=1.38%, 20=31.33%, 50=14.45%, 100=34.42%, 250=17.37%
00:14:06.320 lat (msec) : 500=1.06%
00:14:06.320 cpu : usr=0.47%, sys=0.24%, ctx=2003, majf=0, minf=5
00:14:06.320 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:06.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.320 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:06.320 issued rwts: total=592,640,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:06.320 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:06.320 job99: (groupid=0, jobs=1): err= 0: pid=71638: Mon Jul 22 17:18:25 2024
00:14:06.320 read: IOPS=73, BW=9412KiB/s (9637kB/s)(79.8MiB/8677msec)
00:14:06.320 slat (usec): min=7, max=1546, avg=69.55, stdev=148.07
00:14:06.320 clat (msec): min=5, max=232, avg=22.40, stdev=27.12
00:14:06.320 lat (msec): min=5, max=232, avg=22.47, stdev=27.12
00:14:06.320 clat percentiles (msec):
00:14:06.320 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12],
00:14:06.320 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 17],
00:14:06.320 | 70.00th=[ 21], 80.00th=[ 24], 90.00th=[ 37], 95.00th=[ 65],
00:14:06.320 | 99.00th=[ 222], 99.50th=[ 230], 99.90th=[ 234], 99.95th=[ 234],
00:14:06.320 | 99.99th=[ 234]
00:14:06.320
write: IOPS=77, BW=9950KiB/s (10.2MB/s)(80.0MiB/8233msec); 0 zone resets 00:14:06.320 slat (usec): min=38, max=4831, avg=150.20, stdev=253.03 00:14:06.320 clat (msec): min=2, max=400, avg=102.04, stdev=52.66 00:14:06.320 lat (msec): min=2, max=400, avg=102.20, stdev=52.65 00:14:06.320 clat percentiles (msec): 00:14:06.320 | 1.00th=[ 6], 5.00th=[ 36], 10.00th=[ 72], 20.00th=[ 73], 00:14:06.320 | 30.00th=[ 79], 40.00th=[ 82], 50.00th=[ 86], 60.00th=[ 91], 00:14:06.320 | 70.00th=[ 102], 80.00th=[ 130], 90.00th=[ 174], 95.00th=[ 203], 00:14:06.320 | 99.00th=[ 326], 99.50th=[ 342], 99.90th=[ 401], 99.95th=[ 401], 00:14:06.320 | 99.99th=[ 401] 00:14:06.320 bw ( KiB/s): min= 768, max=19417, per=0.88%, avg=8620.68, stdev=4592.62, samples=19 00:14:06.320 iops : min= 6, max= 151, avg=67.21, stdev=35.78, samples=19 00:14:06.320 lat (msec) : 4=0.16%, 10=3.60%, 20=32.94%, 50=12.68%, 100=33.96% 00:14:06.320 lat (msec) : 250=15.57%, 500=1.10% 00:14:06.320 cpu : usr=0.49%, sys=0.27%, ctx=2074, majf=0, minf=5 00:14:06.320 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.320 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.320 issued rwts: total=638,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.320 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:06.320 00:14:06.320 Run status group 0 (all jobs): 00:14:06.320 READ: bw=868MiB/s (910MB/s), 6964KiB/s-13.5MiB/s (7131kB/s-14.1MB/s), io=7972MiB (8359MB), run=6920-9185msec 00:14:06.320 WRITE: bw=959MiB/s (1005MB/s), 7490KiB/s-14.4MiB/s (7669kB/s-15.1MB/s), io=9077MiB (9517MB), run=7629-9467msec 00:14:06.320 00:14:06.320 Disk stats (read/write): 00:14:06.320 sdc: ios=537/640, merge=0/0, ticks=15332/60813, in_queue=76145, util=71.57% 00:14:06.320 sdd: ios=586/640, merge=0/0, ticks=12892/62816, in_queue=75709, util=71.60% 00:14:06.320 sdg: ios=676/655, merge=0/0, 
ticks=8731/68252, in_queue=76984, util=72.39% 00:14:06.320 sdi: ios=676/668, merge=0/0, ticks=7601/69281, in_queue=76883, util=71.90% 00:14:06.320 sdl: ios=514/550, merge=0/0, ticks=11147/64309, in_queue=75457, util=72.69% 00:14:06.320 sdp: ios=642/654, merge=0/0, ticks=6898/69517, in_queue=76416, util=73.19% 00:14:06.320 sdx: ios=481/596, merge=0/0, ticks=11828/64512, in_queue=76340, util=73.51% 00:14:06.320 sdaa: ios=480/630, merge=0/0, ticks=7719/68401, in_queue=76121, util=74.00% 00:14:06.320 sdae: ios=480/623, merge=0/0, ticks=6915/69382, in_queue=76298, util=74.20% 00:14:06.320 sdaj: ios=480/498, merge=0/0, ticks=12105/64593, in_queue=76698, util=74.61% 00:14:06.320 sdf: ios=844/925, merge=0/0, ticks=11196/66297, in_queue=77494, util=74.91% 00:14:06.320 sdh: ios=640/761, merge=0/0, ticks=10788/66444, in_queue=77233, util=74.98% 00:14:06.321 sdk: ios=840/904, merge=0/0, ticks=11489/65623, in_queue=77113, util=74.93% 00:14:06.321 sdm: ios=836/914, merge=0/0, ticks=9424/67312, in_queue=76736, util=74.77% 00:14:06.321 sdq: ios=802/885, merge=0/0, ticks=9911/65859, in_queue=75771, util=75.18% 00:14:06.321 sdt: ios=801/802, merge=0/0, ticks=8234/67635, in_queue=75869, util=75.69% 00:14:06.321 sdv: ios=802/904, merge=0/0, ticks=10575/65193, in_queue=75768, util=75.84% 00:14:06.321 sdz: ios=802/818, merge=0/0, ticks=12621/63199, in_queue=75820, util=76.19% 00:14:06.321 sdad: ios=801/800, merge=0/0, ticks=10433/66365, in_queue=76798, util=76.56% 00:14:06.321 sdah: ios=802/888, merge=0/0, ticks=10826/65107, in_queue=75934, util=76.51% 00:14:06.321 sdn: ios=840/960, merge=0/0, ticks=12792/64491, in_queue=77283, util=77.61% 00:14:06.321 sds: ios=802/914, merge=0/0, ticks=13036/62487, in_queue=75523, util=77.24% 00:14:06.321 sdw: ios=686/800, merge=0/0, ticks=11920/64549, in_queue=76469, util=77.37% 00:14:06.321 sdab: ios=802/928, merge=0/0, ticks=12846/63185, in_queue=76032, util=77.32% 00:14:06.321 sdaf: ios=802/922, merge=0/0, ticks=13054/62740, in_queue=75794, 
util=77.85% 00:14:06.321 sdai: ios=802/807, merge=0/0, ticks=14115/62225, in_queue=76341, util=77.82% 00:14:06.321 sdak: ios=646/800, merge=0/0, ticks=10719/65992, in_queue=76711, util=77.96% 00:14:06.321 sdal: ios=802/860, merge=0/0, ticks=12049/63487, in_queue=75536, util=78.07% 00:14:06.321 sdan: ios=997/960, merge=0/0, ticks=9414/67654, in_queue=77068, util=78.30% 00:14:06.321 sdap: ios=895/960, merge=0/0, ticks=10951/65547, in_queue=76498, util=78.69% 00:14:06.321 sdam: ios=480/506, merge=0/0, ticks=9369/67893, in_queue=77263, util=78.82% 00:14:06.321 sdao: ios=481/633, merge=0/0, ticks=9422/67463, in_queue=76885, util=79.47% 00:14:06.321 sdaq: ios=480/480, merge=0/0, ticks=14062/61623, in_queue=75686, util=79.46% 00:14:06.321 sdav: ios=642/640, merge=0/0, ticks=10073/66797, in_queue=76871, util=79.93% 00:14:06.321 sdaz: ios=480/596, merge=0/0, ticks=10994/64301, in_queue=75295, util=79.77% 00:14:06.321 sdbc: ios=480/562, merge=0/0, ticks=10396/64253, in_queue=74649, util=79.89% 00:14:06.321 sdbf: ios=553/640, merge=0/0, ticks=11930/65079, in_queue=77009, util=80.47% 00:14:06.321 sdbi: ios=480/635, merge=0/0, ticks=13420/62780, in_queue=76201, util=80.54% 00:14:06.321 sdbn: ios=481/606, merge=0/0, ticks=12923/62954, in_queue=75877, util=80.52% 00:14:06.321 sdbq: ios=627/640, merge=0/0, ticks=7323/70210, in_queue=77534, util=81.04% 00:14:06.321 sdas: ios=641/642, merge=0/0, ticks=10788/63875, in_queue=74663, util=80.83% 00:14:06.321 sdau: ios=600/640, merge=0/0, ticks=14261/62275, in_queue=76536, util=80.84% 00:14:06.321 sdax: ios=480/624, merge=0/0, ticks=10063/65428, in_queue=75492, util=81.07% 00:14:06.321 sdba: ios=480/594, merge=0/0, ticks=5649/71192, in_queue=76841, util=81.15% 00:14:06.321 sdbd: ios=640/640, merge=0/0, ticks=13713/62368, in_queue=76082, util=81.64% 00:14:06.321 sdbg: ios=501/640, merge=0/0, ticks=12064/63806, in_queue=75871, util=81.80% 00:14:06.321 sdbj: ios=481/620, merge=0/0, ticks=9045/67451, in_queue=76497, util=81.93% 00:14:06.321 
sdbl: ios=480/599, merge=0/0, ticks=4530/72086, in_queue=76616, util=82.32% 00:14:06.321 sdbo: ios=480/585, merge=0/0, ticks=8799/66947, in_queue=75747, util=82.86% 00:14:06.321 sdbr: ios=480/525, merge=0/0, ticks=11475/65049, in_queue=76524, util=83.16% 00:14:06.321 sdar: ios=802/928, merge=0/0, ticks=12372/64188, in_queue=76561, util=83.70% 00:14:06.321 sdat: ios=802/812, merge=0/0, ticks=10700/65626, in_queue=76327, util=83.75% 00:14:06.321 sdaw: ios=802/903, merge=0/0, ticks=9196/66610, in_queue=75806, util=83.84% 00:14:06.321 sday: ios=802/800, merge=0/0, ticks=13549/62259, in_queue=75808, util=84.75% 00:14:06.321 sdbb: ios=640/773, merge=0/0, ticks=9865/67642, in_queue=77507, util=84.73% 00:14:06.321 sdbe: ios=835/878, merge=0/0, ticks=11829/65504, in_queue=77334, util=85.30% 00:14:06.321 sdbh: ios=838/935, merge=0/0, ticks=10420/67302, in_queue=77723, util=85.68% 00:14:06.321 sdbk: ios=802/907, merge=0/0, ticks=11516/64596, in_queue=76112, util=85.28% 00:14:06.321 sdbm: ios=641/771, merge=0/0, ticks=12148/64494, in_queue=76642, util=85.64% 00:14:06.321 sdbp: ios=802/889, merge=0/0, ticks=10307/66001, in_queue=76309, util=86.31% 00:14:06.321 sdbs: ios=802/816, merge=0/0, ticks=10401/65477, in_queue=75879, util=86.29% 00:14:06.321 sdbt: ios=723/800, merge=0/0, ticks=12582/64465, in_queue=77048, util=86.59% 00:14:06.321 sdbu: ios=802/955, merge=0/0, ticks=11199/64619, in_queue=75818, util=86.51% 00:14:06.321 sdbx: ios=801/838, merge=0/0, ticks=11049/65088, in_queue=76138, util=86.88% 00:14:06.321 sdbz: ios=802/918, merge=0/0, ticks=12031/63326, in_queue=75358, util=86.99% 00:14:06.321 sdcc: ios=802/934, merge=0/0, ticks=10620/65848, in_queue=76468, util=87.23% 00:14:06.321 sdcg: ios=927/960, merge=0/0, ticks=11985/64251, in_queue=76237, util=87.36% 00:14:06.321 sdcl: ios=802/916, merge=0/0, ticks=13227/62525, in_queue=75752, util=87.72% 00:14:06.321 sdco: ios=800/800, merge=0/0, ticks=12058/64286, in_queue=76344, util=87.71% 00:14:06.321 sdcr: ios=841/941, 
merge=0/0, ticks=10642/66820, in_queue=77463, util=88.98% 00:14:06.321 sdbv: ios=343/480, merge=0/0, ticks=17543/59361, in_queue=76904, util=88.68% 00:14:06.321 sdby: ios=481/632, merge=0/0, ticks=11914/64409, in_queue=76323, util=89.61% 00:14:06.321 sdcb: ios=480/588, merge=0/0, ticks=8560/67298, in_queue=75858, util=89.78% 00:14:06.321 sdcd: ios=481/594, merge=0/0, ticks=10737/64808, in_queue=75545, util=89.95% 00:14:06.321 sdcf: ios=480/617, merge=0/0, ticks=11780/63858, in_queue=75638, util=91.06% 00:14:06.321 sdci: ios=480/552, merge=0/0, ticks=10151/66083, in_queue=76235, util=90.99% 00:14:06.321 sdcj: ios=481/620, merge=0/0, ticks=9595/66187, in_queue=75782, util=91.25% 00:14:06.321 sdcn: ios=480/630, merge=0/0, ticks=11849/64286, in_queue=76136, util=91.83% 00:14:06.321 sdcp: ios=480/584, merge=0/0, ticks=10287/65618, in_queue=75906, util=92.11% 00:14:06.321 sdct: ios=480/635, merge=0/0, ticks=10317/64399, in_queue=74717, util=92.34% 00:14:06.321 sdbw: ios=641/640, merge=0/0, ticks=12197/64459, in_queue=76657, util=92.62% 00:14:06.321 sdca: ios=642/647, merge=0/0, ticks=10876/65535, in_queue=76411, util=93.20% 00:14:06.321 sdce: ios=624/640, merge=0/0, ticks=12360/64148, in_queue=76508, util=93.06% 00:14:06.321 sdch: ios=511/640, merge=0/0, ticks=11019/65295, in_queue=76315, util=93.83% 00:14:06.321 sdck: ios=642/649, merge=0/0, ticks=10313/65790, in_queue=76104, util=94.13% 00:14:06.321 sdcm: ios=481/624, merge=0/0, ticks=10475/66115, in_queue=76591, util=94.44% 00:14:06.321 sdcq: ios=481/636, merge=0/0, ticks=9791/66607, in_queue=76399, util=94.41% 00:14:06.321 sdcs: ios=480/518, merge=0/0, ticks=15134/61214, in_queue=76348, util=94.63% 00:14:06.321 sdcu: ios=480/510, merge=0/0, ticks=10156/66204, in_queue=76361, util=95.36% 00:14:06.321 sdcv: ios=641/640, merge=0/0, ticks=12395/64125, in_queue=76521, util=95.52% 00:14:06.321 sda: ios=480/616, merge=0/0, ticks=9558/66870, in_queue=76429, util=96.11% 00:14:06.321 sdb: ios=480/614, merge=0/0, 
ticks=11218/64157, in_queue=75376, util=96.18% 00:14:06.321 sde: ios=481/564, merge=0/0, ticks=11056/65217, in_queue=76274, util=96.80% 00:14:06.321 sdj: ios=481/627, merge=0/0, ticks=11635/64167, in_queue=75803, util=97.16% 00:14:06.321 sdo: ios=480/486, merge=0/0, ticks=13003/63736, in_queue=76740, util=97.21% 00:14:06.321 sdr: ios=481/615, merge=0/0, ticks=11668/64257, in_queue=75925, util=97.93% 00:14:06.321 sdu: ios=480/480, merge=0/0, ticks=15866/61240, in_queue=77107, util=97.95% 00:14:06.321 sdy: ios=481/640, merge=0/0, ticks=8728/68255, in_queue=76983, util=98.20% 00:14:06.321 sdac: ios=520/640, merge=0/0, ticks=10822/65464, in_queue=76286, util=98.29% 00:14:06.321 sdag: ios=572/640, merge=0/0, ticks=12968/64055, in_queue=77023, util=98.96% 00:14:06.321 [2024-07-22 17:18:25.101389] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.103674] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.105982] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.108227] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.110512] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.112673] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.115069] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.118968] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.121243] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 17:18:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@78 -- # timing_exit fio 00:14:06.321 17:18:25 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:06.321 17:18:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:06.321 [2024-07-22 17:18:25.123650] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.125650] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.127874] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.129897] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.132167] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.134545] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.136608] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.139107] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.142270] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.144534] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.146726] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.148845] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.150965] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 17:18:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@80 -- # rm -f ./local-job0-0-verify.state 00:14:06.321 [2024-07-22 17:18:25.153332] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD 
page 0xb9 00:14:06.321 17:18:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:14:06.321 17:18:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@83 -- # rm -f 00:14:06.321 [2024-07-22 17:18:25.155347] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 17:18:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@84 -- # iscsicleanup 00:14:06.321 Cleaning up iSCSI connection 00:14:06.321 17:18:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:14:06.321 17:18:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:14:06.321 [2024-07-22 17:18:25.157708] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.160120] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.162643] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.164967] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.167754] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.321 [2024-07-22 17:18:25.170302] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.172332] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.175757] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.179126] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.181557] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.183772] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.186031] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.187949] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.190016] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.192111] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.194051] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.196044] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.197992] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.200311] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.204020] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.209070] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.215340] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.217793] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.219807] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.221963] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.223974] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.226733] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.229320] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.231807] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.234073] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.236151] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.322 [2024-07-22 17:18:25.238450] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.240933] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.243357] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.247322] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.253876] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.258471] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.264307] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.266404] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.268238] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.271747] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.276105] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.280989] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 
17:18:25.286974] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.288855] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.291263] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.295863] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.297806] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.299765] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.303161] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.310791] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.318203] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.321372] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.324980] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.328362] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.332170] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.334325] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.337360] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.341482] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.344812] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.347461] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.351436] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.353803] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.356950] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.580 [2024-07-22 17:18:25.360992] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:06.839 Logging out of session [sid: 10, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:14:06.839 Logging out of session [sid: 11, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:14:06.839 Logging out of session [sid: 12, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:14:06.839 Logging out of session [sid: 13, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:14:06.839 Logging out of session [sid: 14, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:14:06.839 Logging out of session [sid: 15, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:14:06.839 Logging out of session [sid: 16, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:14:06.839 Logging out of session [sid: 17, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:14:06.839 Logging out of session [sid: 18, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:14:06.839 Logging out of session [sid: 9, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:14:06.839 Logout of [sid: 10, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:14:06.839 Logout of [sid: 11, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:14:06.839 Logout of [sid: 12, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:14:06.839 Logout of [sid: 13, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:14:06.839 Logout of [sid: 14, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:14:06.839 Logout of [sid: 15, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:14:06.839 Logout of [sid: 16, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:14:06.839 Logout of [sid: 17, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:14:06.839 Logout of [sid: 18, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:14:06.839 Logout of [sid: 9, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:14:06.839 17:18:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:14:06.839 17:18:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@983 -- # rm -rf 00:14:06.839 17:18:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@85 -- # killprocess 68524 00:14:06.839 17:18:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@948 -- # '[' -z 68524 ']' 00:14:06.839 17:18:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@952 -- # kill -0 68524 00:14:06.840 17:18:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@953 -- # uname 00:14:06.840 17:18:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:06.840 17:18:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68524 00:14:06.840 17:18:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:06.840 17:18:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:06.840 killing process with pid 68524 00:14:06.840 17:18:25 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68524' 00:14:06.840 17:18:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@967 -- # kill 68524 00:14:06.840 17:18:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@972 -- # wait 68524 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@86 -- # iscsitestfini 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:14:13.479 00:14:13.479 real 1m8.208s 00:14:13.479 user 4m40.140s 00:14:13.479 sys 0m25.795s 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:13.479 ************************************ 00:14:13.479 END TEST iscsi_tgt_iscsi_lvol 00:14:13.479 ************************************ 00:14:13.479 17:18:31 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:14:13.479 17:18:31 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@37 -- # run_test iscsi_tgt_fio /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/fio.sh 00:14:13.479 17:18:31 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:13.479 17:18:31 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:13.479 17:18:31 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:14:13.479 ************************************ 00:14:13.479 START TEST iscsi_tgt_fio 00:14:13.479 ************************************ 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/fio.sh 00:14:13.479 * Looking for test storage... 
00:14:13.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- 
iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@11 -- # iscsitestinit 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@48 -- # '[' -z 10.0.0.1 ']' 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@53 -- # '[' -z 10.0.0.2 ']' 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@58 -- # MALLOC_BDEV_SIZE=64 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@59 -- # MALLOC_BLOCK_SIZE=4096 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@60 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@61 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@63 -- # timing_enter start_iscsi_tgt 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:14:13.479 Process pid: 73249 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@66 -- # pid=73249 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@65 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@67 -- # echo 'Process pid: 73249' 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@69 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@71 -- # waitforlisten 73249 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@829 -- # '[' -z 73249 ']' 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@833 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.479 17:18:31 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.480 17:18:31 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.480 17:18:31 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:14:13.480 [2024-07-22 17:18:32.017194] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:13.480 [2024-07-22 17:18:32.018138] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73249 ] 00:14:13.480 [2024-07-22 17:18:32.194446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.737 [2024-07-22 17:18:32.446515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.995 17:18:32 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.995 17:18:32 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@862 -- # return 0 00:14:13.995 17:18:32 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:14:14.926 iscsi_tgt is listening. Running tests... 00:14:14.926 17:18:33 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@75 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:14:14.926 17:18:33 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@77 -- # timing_exit start_iscsi_tgt 00:14:14.926 17:18:33 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:14.926 17:18:33 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:14:15.183 17:18:33 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:14:15.441 17:18:34 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:14:15.699 17:18:34 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 4096 00:14:15.956 17:18:34 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@82 -- # malloc_bdevs='Malloc0 ' 00:14:15.956 17:18:34 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 4096 00:14:16.214 17:18:35 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@83 -- # malloc_bdevs+=Malloc1 00:14:16.214 17:18:35 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:16.472 17:18:35 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 1024 512 00:14:17.846 17:18:36 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@85 -- # bdev=Malloc2 00:14:17.846 17:18:36 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias 'raid0:0 Malloc2:1' 1:2 64 -d 00:14:18.139 17:18:37 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@91 -- # sleep 1 00:14:19.073 17:18:38 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@93 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:14:19.331 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:14:19.331 17:18:38 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@94 -- # iscsiadm -m node --login -p 
10.0.0.1:3260 00:14:19.331 [2024-07-22 17:18:38.082736] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:19.331 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:14:19.331 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:14:19.331 17:18:38 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@95 -- # waitforiscsidevices 2 00:14:19.331 17:18:38 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@116 -- # local num=2 00:14:19.331 17:18:38 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:14:19.331 17:18:38 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:14:19.331 17:18:38 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:14:19.331 17:18:38 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:14:19.331 [2024-07-22 17:18:38.095514] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:19.331 17:18:38 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # n=2 00:14:19.331 17:18:38 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@120 -- # '[' 2 -ne 2 ']' 00:14:19.331 17:18:38 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@123 -- # return 0 00:14:19.331 17:18:38 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@97 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; delete_tmp_files; exit 1' SIGINT SIGTERM EXIT 00:14:19.331 17:18:38 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t randrw -r 1 -v 00:14:19.331 [global] 00:14:19.331 thread=1 00:14:19.331 invalidate=1 00:14:19.331 rw=randrw 00:14:19.331 time_based=1 00:14:19.331 runtime=1 00:14:19.331 ioengine=libaio 00:14:19.331 direct=1 00:14:19.331 bs=4096 00:14:19.331 iodepth=1 00:14:19.331 norandommap=0 00:14:19.331 numjobs=1 00:14:19.331 00:14:19.331 verify_dump=1 00:14:19.331 verify_backlog=512 
00:14:19.331 verify_state_save=0 00:14:19.331 do_verify=1 00:14:19.331 verify=crc32c-intel 00:14:19.331 [job0] 00:14:19.331 filename=/dev/sda 00:14:19.331 [job1] 00:14:19.331 filename=/dev/sdb 00:14:19.331 queue_depth set to 113 (sda) 00:14:19.331 queue_depth set to 113 (sdb) 00:14:19.589 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:19.589 job1: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:19.589 fio-3.35 00:14:19.589 Starting 2 threads 00:14:19.589 [2024-07-22 17:18:38.305375] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:19.589 [2024-07-22 17:18:38.308516] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:20.523 [2024-07-22 17:18:39.418888] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:20.523 [2024-07-22 17:18:39.421957] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:20.523 00:14:20.523 job0: (groupid=0, jobs=1): err= 0: pid=73401: Mon Jul 22 17:18:39 2024 00:14:20.523 read: IOPS=3604, BW=14.1MiB/s (14.8MB/s)(14.1MiB/1001msec) 00:14:20.523 slat (nsec): min=3524, max=49270, avg=8014.93, stdev=3280.80 00:14:20.523 clat (usec): min=111, max=410, avg=165.50, stdev=24.38 00:14:20.523 lat (usec): min=118, max=458, avg=173.51, stdev=25.66 00:14:20.523 clat percentiles (usec): 00:14:20.523 | 1.00th=[ 127], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 149], 00:14:20.523 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:14:20.523 | 70.00th=[ 169], 80.00th=[ 180], 90.00th=[ 198], 95.00th=[ 215], 00:14:20.523 | 99.00th=[ 247], 99.50th=[ 269], 99.90th=[ 310], 99.95th=[ 314], 00:14:20.523 | 99.99th=[ 412] 00:14:20.523 bw ( KiB/s): min= 7728, max= 7728, per=26.55%, avg=7728.00, stdev= 0.00, samples=1 00:14:20.523 iops : min= 1932, max= 1932, avg=1932.00, stdev= 0.00, samples=1 
00:14:20.523 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:14:20.523 slat (usec): min=4, max=129, avg= 9.64, stdev= 4.39 00:14:20.523 clat (usec): min=111, max=355, avg=168.91, stdev=29.86 00:14:20.523 lat (usec): min=119, max=362, avg=178.55, stdev=31.67 00:14:20.523 clat percentiles (usec): 00:14:20.523 | 1.00th=[ 122], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 145], 00:14:20.523 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 167], 00:14:20.523 | 70.00th=[ 182], 80.00th=[ 194], 90.00th=[ 208], 95.00th=[ 227], 00:14:20.523 | 99.00th=[ 262], 99.50th=[ 277], 99.90th=[ 289], 99.95th=[ 326], 00:14:20.523 | 99.99th=[ 355] 00:14:20.523 bw ( KiB/s): min= 8175, max= 8175, per=49.95%, avg=8175.00, stdev= 0.00, samples=1 00:14:20.523 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:14:20.523 lat (usec) : 250=98.78%, 500=1.22% 00:14:20.523 cpu : usr=2.10%, sys=6.90%, ctx=5656, majf=0, minf=9 00:14:20.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.523 issued rwts: total=3608,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.523 job1: (groupid=0, jobs=1): err= 0: pid=73402: Mon Jul 22 17:18:39 2024 00:14:20.523 read: IOPS=3673, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1001msec) 00:14:20.523 slat (nsec): min=3346, max=47813, avg=6515.10, stdev=3451.83 00:14:20.523 clat (usec): min=91, max=844, avg=160.81, stdev=27.15 00:14:20.523 lat (usec): min=97, max=863, avg=167.33, stdev=28.46 00:14:20.523 clat percentiles (usec): 00:14:20.523 | 1.00th=[ 95], 5.00th=[ 135], 10.00th=[ 145], 20.00th=[ 149], 00:14:20.523 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 161], 00:14:20.523 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 188], 95.00th=[ 200], 
00:14:20.523 | 99.00th=[ 249], 99.50th=[ 265], 99.90th=[ 351], 99.95th=[ 652], 00:14:20.523 | 99.99th=[ 848] 00:14:20.523 bw ( KiB/s): min= 7497, max= 7497, per=25.75%, avg=7497.00, stdev= 0.00, samples=1 00:14:20.523 iops : min= 1874, max= 1874, avg=1874.00, stdev= 0.00, samples=1 00:14:20.523 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:14:20.523 slat (nsec): min=4285, max=35415, avg=8803.26, stdev=4978.94 00:14:20.523 clat (usec): min=99, max=2666, avg=175.28, stdev=62.64 00:14:20.523 lat (usec): min=111, max=2680, avg=184.09, stdev=64.11 00:14:20.523 clat percentiles (usec): 00:14:20.523 | 1.00th=[ 113], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 153], 00:14:20.523 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 167], 60.00th=[ 172], 00:14:20.523 | 70.00th=[ 182], 80.00th=[ 194], 90.00th=[ 212], 95.00th=[ 231], 00:14:20.523 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 322], 99.95th=[ 338], 00:14:20.523 | 99.99th=[ 2671] 00:14:20.523 bw ( KiB/s): min= 8175, max= 8175, per=49.95%, avg=8175.00, stdev= 0.00, samples=1 00:14:20.523 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:14:20.523 lat (usec) : 100=1.28%, 250=97.21%, 500=1.47%, 750=0.02%, 1000=0.02% 00:14:20.523 lat (msec) : 4=0.02% 00:14:20.523 cpu : usr=2.30%, sys=5.40%, ctx=5725, majf=0, minf=11 00:14:20.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.523 issued rwts: total=3677,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.523 00:14:20.523 Run status group 0 (all jobs): 00:14:20.523 READ: bw=28.4MiB/s (29.8MB/s), 14.1MiB/s-14.3MiB/s (14.8MB/s-15.0MB/s), io=28.5MiB (29.8MB), run=1001-1001msec 00:14:20.523 WRITE: bw=16.0MiB/s (16.8MB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), 
io=16.0MiB (16.8MB), run=1001-1001msec 00:14:20.523 00:14:20.523 Disk stats (read/write): 00:14:20.523 sda: ios=3268/1751, merge=0/0, ticks=548/296, in_queue=844, util=90.70% 00:14:20.523 sdb: ios=3282/1809, merge=0/0, ticks=531/312, in_queue=844, util=90.97% 00:14:20.523 17:18:39 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 -v 00:14:20.781 [global] 00:14:20.781 thread=1 00:14:20.781 invalidate=1 00:14:20.781 rw=randrw 00:14:20.781 time_based=1 00:14:20.781 runtime=1 00:14:20.781 ioengine=libaio 00:14:20.781 direct=1 00:14:20.781 bs=131072 00:14:20.781 iodepth=32 00:14:20.781 norandommap=0 00:14:20.781 numjobs=1 00:14:20.781 00:14:20.781 verify_dump=1 00:14:20.781 verify_backlog=512 00:14:20.781 verify_state_save=0 00:14:20.781 do_verify=1 00:14:20.781 verify=crc32c-intel 00:14:20.781 [job0] 00:14:20.781 filename=/dev/sda 00:14:20.781 [job1] 00:14:20.781 filename=/dev/sdb 00:14:20.781 queue_depth set to 113 (sda) 00:14:20.781 queue_depth set to 113 (sdb) 00:14:20.781 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:14:20.781 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:14:20.781 fio-3.35 00:14:20.781 Starting 2 threads 00:14:20.781 [2024-07-22 17:18:39.626396] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:20.781 [2024-07-22 17:18:39.629879] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:22.156 [2024-07-22 17:18:40.726178] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:22.156 [2024-07-22 17:18:40.764457] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:22.156 [2024-07-22 17:18:40.767840] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:22.156 00:14:22.156 
job0: (groupid=0, jobs=1): err= 0: pid=73464: Mon Jul 22 17:18:40 2024 00:14:22.156 read: IOPS=1451, BW=181MiB/s (190MB/s)(183MiB/1009msec) 00:14:22.156 slat (usec): min=7, max=186, avg=26.64, stdev=12.54 00:14:22.156 clat (usec): min=1565, max=23256, avg=6243.59, stdev=4203.27 00:14:22.156 lat (usec): min=1589, max=23286, avg=6270.24, stdev=4202.01 00:14:22.156 clat percentiles (usec): 00:14:22.156 | 1.00th=[ 1696], 5.00th=[ 1860], 10.00th=[ 1942], 20.00th=[ 2089], 00:14:22.156 | 30.00th=[ 2245], 40.00th=[ 2507], 50.00th=[ 6194], 60.00th=[ 8717], 00:14:22.156 | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[10945], 95.00th=[12125], 00:14:22.156 | 99.00th=[16909], 99.50th=[19006], 99.90th=[22676], 99.95th=[23200], 00:14:22.156 | 99.99th=[23200] 00:14:22.156 bw ( KiB/s): min=71936, max=122112, per=28.10%, avg=97024.00, stdev=35479.79, samples=2 00:14:22.156 iops : min= 562, max= 954, avg=758.00, stdev=277.19, samples=2 00:14:22.156 write: IOPS=817, BW=102MiB/s (107MB/s)(97.2MiB/952msec); 0 zone resets 00:14:22.156 slat (usec): min=33, max=475, avg=88.07, stdev=23.95 00:14:22.156 clat (usec): min=4850, max=46723, avg=29396.39, stdev=4250.56 00:14:22.156 lat (usec): min=4972, max=46812, avg=29484.46, stdev=4252.38 00:14:22.156 clat percentiles (usec): 00:14:22.156 | 1.00th=[12256], 5.00th=[21103], 10.00th=[26870], 20.00th=[28181], 00:14:22.156 | 30.00th=[28705], 40.00th=[29230], 50.00th=[29754], 60.00th=[30016], 00:14:22.156 | 70.00th=[30540], 80.00th=[31327], 90.00th=[32375], 95.00th=[33424], 00:14:22.156 | 99.00th=[43254], 99.50th=[43779], 99.90th=[46924], 99.95th=[46924], 00:14:22.156 | 99.99th=[46924] 00:14:22.156 bw ( KiB/s): min=75008, max=124160, per=49.69%, avg=99584.00, stdev=34755.71, samples=2 00:14:22.156 iops : min= 586, max= 970, avg=778.00, stdev=271.53, samples=2 00:14:22.156 lat (msec) : 2=8.56%, 4=22.02%, 10=21.09%, 20=14.44%, 50=33.88% 00:14:22.156 cpu : usr=9.92%, sys=6.15%, ctx=1574, majf=0, minf=5 00:14:22.156 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 
8=0.7%, 16=1.4%, 32=97.2%, >=64=0.0% 00:14:22.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.156 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:14:22.156 issued rwts: total=1465,778,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.156 latency : target=0, window=0, percentile=100.00%, depth=32 00:14:22.156 job1: (groupid=0, jobs=1): err= 0: pid=73467: Mon Jul 22 17:18:40 2024 00:14:22.156 read: IOPS=1251, BW=156MiB/s (164MB/s)(159MiB/1013msec) 00:14:22.156 slat (usec): min=6, max=2867, avg=23.76, stdev=93.06 00:14:22.156 clat (usec): min=1528, max=18980, avg=6256.28, stdev=4629.86 00:14:22.156 lat (usec): min=1542, max=18998, avg=6280.04, stdev=4635.70 00:14:22.156 clat percentiles (usec): 00:14:22.156 | 1.00th=[ 1647], 5.00th=[ 1795], 10.00th=[ 1893], 20.00th=[ 2057], 00:14:22.156 | 30.00th=[ 2212], 40.00th=[ 2376], 50.00th=[ 3294], 60.00th=[ 7832], 00:14:22.156 | 70.00th=[10552], 80.00th=[11207], 90.00th=[12125], 95.00th=[13304], 00:14:22.156 | 99.00th=[18220], 99.50th=[18220], 99.90th=[19006], 99.95th=[19006], 00:14:22.156 | 99.99th=[19006] 00:14:22.156 bw ( KiB/s): min=75264, max=117226, per=27.87%, avg=96245.00, stdev=29671.61, samples=2 00:14:22.156 iops : min= 588, max= 915, avg=751.50, stdev=231.22, samples=2 00:14:22.156 write: IOPS=797, BW=99.7MiB/s (105MB/s)(101MiB/1013msec); 0 zone resets 00:14:22.156 slat (usec): min=30, max=802, avg=56.31, stdev=36.09 00:14:22.156 clat (usec): min=11847, max=58043, avg=30146.63, stdev=4542.99 00:14:22.156 lat (usec): min=11892, max=58095, avg=30202.93, stdev=4544.52 00:14:22.156 clat percentiles (usec): 00:14:22.156 | 1.00th=[17433], 5.00th=[24249], 10.00th=[26870], 20.00th=[28443], 00:14:22.156 | 30.00th=[28967], 40.00th=[29492], 50.00th=[29754], 60.00th=[30278], 00:14:22.156 | 70.00th=[30802], 80.00th=[31851], 90.00th=[32900], 95.00th=[36439], 00:14:22.156 | 99.00th=[49021], 99.50th=[52167], 99.90th=[57934], 99.95th=[57934], 00:14:22.156 | 99.99th=[57934] 
00:14:22.156 bw ( KiB/s): min=75264, max=124921, per=49.95%, avg=100092.50, stdev=35112.80, samples=2 00:14:22.156 iops : min= 588, max= 975, avg=781.50, stdev=273.65, samples=2 00:14:22.156 lat (msec) : 2=10.12%, 4=21.44%, 10=8.09%, 20=22.16%, 50=37.91% 00:14:22.156 lat (msec) : 100=0.29% 00:14:22.156 cpu : usr=4.94%, sys=3.56%, ctx=1989, majf=0, minf=5 00:14:22.156 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0% 00:14:22.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:14:22.156 issued rwts: total=1268,808,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.156 latency : target=0, window=0, percentile=100.00%, depth=32 00:14:22.156 00:14:22.156 Run status group 0 (all jobs): 00:14:22.156 READ: bw=337MiB/s (354MB/s), 156MiB/s-181MiB/s (164MB/s-190MB/s), io=342MiB (358MB), run=1009-1013msec 00:14:22.156 WRITE: bw=196MiB/s (205MB/s), 99.7MiB/s-102MiB/s (105MB/s-107MB/s), io=198MiB (208MB), run=952-1013msec 00:14:22.156 00:14:22.156 Disk stats (read/write): 00:14:22.156 sda: ios=1247/700, merge=0/0, ticks=6863/20545, in_queue=27408, util=90.18% 00:14:22.156 sdb: ios=1212/680, merge=0/0, ticks=6946/20239, in_queue=27184, util=90.14% 00:14:22.156 17:18:40 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@101 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 524288 -d 128 -t randrw -r 1 -v 00:14:22.156 [global] 00:14:22.157 thread=1 00:14:22.157 invalidate=1 00:14:22.157 rw=randrw 00:14:22.157 time_based=1 00:14:22.157 runtime=1 00:14:22.157 ioengine=libaio 00:14:22.157 direct=1 00:14:22.157 bs=524288 00:14:22.157 iodepth=128 00:14:22.157 norandommap=0 00:14:22.157 numjobs=1 00:14:22.157 00:14:22.157 verify_dump=1 00:14:22.157 verify_backlog=512 00:14:22.157 verify_state_save=0 00:14:22.157 do_verify=1 00:14:22.157 verify=crc32c-intel 00:14:22.157 [job0] 00:14:22.157 filename=/dev/sda 00:14:22.157 [job1] 00:14:22.157 filename=/dev/sdb 
00:14:22.157 queue_depth set to 113 (sda) 00:14:22.157 queue_depth set to 113 (sdb) 00:14:22.157 job0: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128 00:14:22.157 job1: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128 00:14:22.157 fio-3.35 00:14:22.157 Starting 2 threads 00:14:22.157 [2024-07-22 17:18:40.969693] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:22.157 [2024-07-22 17:18:40.973258] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:23.535 [2024-07-22 17:18:42.237052] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:23.535 [2024-07-22 17:18:42.240330] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:23.535 00:14:23.535 job0: (groupid=0, jobs=1): err= 0: pid=73545: Mon Jul 22 17:18:42 2024 00:14:23.535 read: IOPS=222, BW=111MiB/s (117MB/s)(121MiB/1087msec) 00:14:23.535 slat (usec): min=22, max=17521, avg=1776.41, stdev=3173.11 00:14:23.535 clat (msec): min=131, max=357, avg=243.92, stdev=45.99 00:14:23.535 lat (msec): min=131, max=357, avg=245.69, stdev=46.19 00:14:23.535 clat percentiles (msec): 00:14:23.535 | 1.00th=[ 133], 5.00th=[ 144], 10.00th=[ 176], 20.00th=[ 205], 00:14:23.535 | 30.00th=[ 234], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 259], 00:14:23.535 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 288], 95.00th=[ 296], 00:14:23.535 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 359], 99.95th=[ 359], 00:14:23.535 | 99.99th=[ 359] 00:14:23.535 bw ( KiB/s): min=94208, max=107520, per=40.73%, avg=100864.00, stdev=9413.01, samples=2 00:14:23.535 iops : min= 184, max= 210, avg=197.00, stdev=18.38, samples=2 00:14:23.535 write: IOPS=248, BW=124MiB/s (130MB/s)(135MiB/1087msec); 0 zone resets 00:14:23.535 slat (usec): min=150, max=17157, avg=1929.23, stdev=3075.97 00:14:23.535 clat (msec): min=131, 
max=433, avg=274.82, stdev=56.85 00:14:23.535 lat (msec): min=131, max=441, avg=276.75, stdev=57.31 00:14:23.535 clat percentiles (msec): 00:14:23.535 | 1.00th=[ 148], 5.00th=[ 169], 10.00th=[ 188], 20.00th=[ 234], 00:14:23.535 | 30.00th=[ 255], 40.00th=[ 271], 50.00th=[ 284], 60.00th=[ 292], 00:14:23.535 | 70.00th=[ 296], 80.00th=[ 305], 90.00th=[ 338], 95.00th=[ 380], 00:14:23.535 | 99.00th=[ 435], 99.50th=[ 435], 99.90th=[ 435], 99.95th=[ 435], 00:14:23.535 | 99.99th=[ 435] 00:14:23.535 bw ( KiB/s): min=84992, max=107520, per=35.59%, avg=96256.00, stdev=15929.70, samples=2 00:14:23.535 iops : min= 166, max= 210, avg=188.00, stdev=31.11, samples=2 00:14:23.535 lat (msec) : 250=36.33%, 500=63.67% 00:14:23.535 cpu : usr=6.45%, sys=2.85%, ctx=229, majf=0, minf=5 00:14:23.535 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.2%, >=64=87.7% 00:14:23.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.535 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:14:23.535 issued rwts: total=242,270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:23.535 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:23.535 job1: (groupid=0, jobs=1): err= 0: pid=73548: Mon Jul 22 17:18:42 2024 00:14:23.535 read: IOPS=263, BW=132MiB/s (138MB/s)(145MiB/1100msec) 00:14:23.535 slat (usec): min=21, max=10376, avg=1687.64, stdev=2936.29 00:14:23.535 clat (msec): min=100, max=303, avg=209.61, stdev=35.05 00:14:23.535 lat (msec): min=100, max=311, avg=211.29, stdev=35.03 00:14:23.535 clat percentiles (msec): 00:14:23.535 | 1.00th=[ 102], 5.00th=[ 131], 10.00th=[ 171], 20.00th=[ 192], 00:14:23.535 | 30.00th=[ 199], 40.00th=[ 207], 50.00th=[ 213], 60.00th=[ 220], 00:14:23.535 | 70.00th=[ 226], 80.00th=[ 234], 90.00th=[ 249], 95.00th=[ 259], 00:14:23.535 | 99.00th=[ 296], 99.50th=[ 305], 99.90th=[ 305], 99.95th=[ 305], 00:14:23.535 | 99.99th=[ 305] 00:14:23.535 bw ( KiB/s): min=89088, max=141312, per=46.52%, avg=115200.00, 
stdev=36927.94, samples=2 00:14:23.535 iops : min= 174, max= 276, avg=225.00, stdev=72.12, samples=2 00:14:23.535 write: IOPS=282, BW=141MiB/s (148MB/s)(156MiB/1100msec); 0 zone resets 00:14:23.535 slat (usec): min=148, max=14334, avg=1652.98, stdev=2803.65 00:14:23.535 clat (msec): min=91, max=328, avg=233.56, stdev=37.75 00:14:23.535 lat (msec): min=100, max=335, avg=235.21, stdev=38.01 00:14:23.535 clat percentiles (msec): 00:14:23.535 | 1.00th=[ 110], 5.00th=[ 155], 10.00th=[ 190], 20.00th=[ 218], 00:14:23.535 | 30.00th=[ 226], 40.00th=[ 232], 50.00th=[ 239], 60.00th=[ 243], 00:14:23.535 | 70.00th=[ 251], 80.00th=[ 257], 90.00th=[ 268], 95.00th=[ 292], 00:14:23.535 | 99.00th=[ 317], 99.50th=[ 330], 99.90th=[ 330], 99.95th=[ 330], 00:14:23.535 | 99.99th=[ 330] 00:14:23.535 bw ( KiB/s): min=95232, max=158720, per=46.95%, avg=126976.00, stdev=44892.80, samples=2 00:14:23.535 iops : min= 186, max= 310, avg=248.00, stdev=87.68, samples=2 00:14:23.535 lat (msec) : 100=0.17%, 250=80.20%, 500=19.63% 00:14:23.535 cpu : usr=7.73%, sys=2.82%, ctx=232, majf=0, minf=9 00:14:23.535 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.3%, >=64=89.5% 00:14:23.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.535 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:14:23.535 issued rwts: total=290,311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:23.535 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:23.535 00:14:23.535 Run status group 0 (all jobs): 00:14:23.535 READ: bw=242MiB/s (254MB/s), 111MiB/s-132MiB/s (117MB/s-138MB/s), io=266MiB (279MB), run=1087-1100msec 00:14:23.535 WRITE: bw=264MiB/s (277MB/s), 124MiB/s-141MiB/s (130MB/s-148MB/s), io=291MiB (305MB), run=1087-1100msec 00:14:23.535 00:14:23.535 Disk stats (read/write): 00:14:23.535 sda: ios=290/269, merge=0/0, ticks=22677/33035, in_queue=55712, util=82.07% 00:14:23.535 sdb: ios=331/301, merge=0/0, ticks=23180/33040, in_queue=56220, util=84.70% 
00:14:23.535 17:18:42 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1048576 -d 1024 -t read -r 1 -n 4 00:14:23.535 [global] 00:14:23.535 thread=1 00:14:23.535 invalidate=1 00:14:23.535 rw=read 00:14:23.535 time_based=1 00:14:23.535 runtime=1 00:14:23.535 ioengine=libaio 00:14:23.535 direct=1 00:14:23.535 bs=1048576 00:14:23.535 iodepth=1024 00:14:23.535 norandommap=1 00:14:23.535 numjobs=4 00:14:23.535 00:14:23.535 [job0] 00:14:23.535 filename=/dev/sda 00:14:23.535 [job1] 00:14:23.535 filename=/dev/sdb 00:14:23.535 queue_depth set to 113 (sda) 00:14:23.535 queue_depth set to 113 (sdb) 00:14:23.535 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1024 00:14:23.535 ... 00:14:23.535 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1024 00:14:23.535 ... 00:14:23.535 fio-3.35 00:14:23.535 Starting 8 threads 00:14:35.747 00:14:35.747 job0: (groupid=0, jobs=1): err= 0: pid=73609: Mon Jul 22 17:18:54 2024 00:14:35.747 read: IOPS=2, BW=2330KiB/s (2386kB/s)(26.0MiB/11426msec) 00:14:35.747 slat (usec): min=488, max=1442.9k, avg=56757.19, stdev=282723.27 00:14:35.747 clat (msec): min=9949, max=11424, avg=11355.47, stdev=286.89 00:14:35.747 lat (msec): min=11392, max=11425, avg=11412.23, stdev=11.20 00:14:35.747 clat percentiles (msec): 00:14:35.747 | 1.00th=[10000], 5.00th=[11342], 10.00th=[11342], 20.00th=[11342], 00:14:35.747 | 30.00th=[11342], 40.00th=[11342], 50.00th=[11476], 60.00th=[11476], 00:14:35.747 | 70.00th=[11476], 80.00th=[11476], 90.00th=[11476], 95.00th=[11476], 00:14:35.747 | 99.00th=[11476], 99.50th=[11476], 99.90th=[11476], 99.95th=[11476], 00:14:35.747 | 99.99th=[11476] 00:14:35.747 lat (msec) : >=2000=100.00% 00:14:35.747 cpu : usr=0.00%, sys=0.14%, ctx=22, majf=0, minf=6657 00:14:35.747 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 
00:14:35.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.747 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:14:35.747 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.747 latency : target=0, window=0, percentile=100.00%, depth=1024 00:14:35.747 job0: (groupid=0, jobs=1): err= 0: pid=73610: Mon Jul 22 17:18:54 2024 00:14:35.747 read: IOPS=0, BW=449KiB/s (460kB/s)(5120KiB/11404msec) 00:14:35.747 slat (usec): min=597, max=1442.8k, avg=290169.45, stdev=644336.81 00:14:35.747 clat (msec): min=9952, max=11397, avg=11107.32, stdev=645.71 00:14:35.747 lat (msec): min=11395, max=11403, avg=11397.49, stdev= 3.24 00:14:35.747 clat percentiles (msec): 00:14:35.747 | 1.00th=[10000], 5.00th=[10000], 10.00th=[10000], 20.00th=[10000], 00:14:35.747 | 30.00th=[11342], 40.00th=[11342], 50.00th=[11342], 60.00th=[11342], 00:14:35.747 | 70.00th=[11342], 80.00th=[11342], 90.00th=[11342], 95.00th=[11342], 00:14:35.747 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:14:35.747 | 99.99th=[11342] 00:14:35.747 lat (msec) : >=2000=100.00% 00:14:35.747 cpu : usr=0.00%, sys=0.03%, ctx=12, majf=0, minf=1281 00:14:35.747 IO depths : 1=20.0%, 2=40.0%, 4=40.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:35.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.747 complete : 0=0.0%, 4=0.0%, 8=100.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.747 issued rwts: total=5,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.747 latency : target=0, window=0, percentile=100.00%, depth=1024 00:14:35.747 job0: (groupid=0, jobs=1): err= 0: pid=73611: Mon Jul 22 17:18:54 2024 00:14:35.747 read: IOPS=0, BW=985KiB/s (1009kB/s)(11.0MiB/11432msec) 00:14:35.747 slat (usec): min=570, max=1443.0k, avg=133192.85, stdev=434419.65 00:14:35.747 clat (msec): min=9966, max=11424, avg=11283.69, stdev=436.84 00:14:35.747 lat (msec): min=11409, max=11431, avg=11416.88, stdev= 7.38 00:14:35.747 
clat percentiles (msec): 00:14:35.747 | 1.00th=[10000], 5.00th=[10000], 10.00th=[11476], 20.00th=[11476], 00:14:35.747 | 30.00th=[11476], 40.00th=[11476], 50.00th=[11476], 60.00th=[11476], 00:14:35.747 | 70.00th=[11476], 80.00th=[11476], 90.00th=[11476], 95.00th=[11476], 00:14:35.747 | 99.00th=[11476], 99.50th=[11476], 99.90th=[11476], 99.95th=[11476], 00:14:35.747 | 99.99th=[11476] 00:14:35.747 lat (msec) : >=2000=100.00% 00:14:35.747 cpu : usr=0.00%, sys=0.06%, ctx=18, majf=0, minf=2817 00:14:35.747 IO depths : 1=9.1%, 2=18.2%, 4=36.4%, 8=36.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:35.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.747 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.747 issued rwts: total=11,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.747 latency : target=0, window=0, percentile=100.00%, depth=1024 00:14:35.747 job0: (groupid=0, jobs=1): err= 0: pid=73612: Mon Jul 22 17:18:54 2024 00:14:35.747 read: IOPS=3, BW=3221KiB/s (3298kB/s)(36.0MiB/11446msec) 00:14:35.747 slat (usec): min=411, max=1442.9k, avg=41132.22, stdev=240310.37 00:14:35.747 clat (msec): min=9964, max=11444, avg=11383.95, stdev=243.64 00:14:35.747 lat (msec): min=11407, max=11445, avg=11425.08, stdev=12.89 00:14:35.747 clat percentiles (msec): 00:14:35.747 | 1.00th=[10000], 5.00th=[11342], 10.00th=[11342], 20.00th=[11476], 00:14:35.747 | 30.00th=[11476], 40.00th=[11476], 50.00th=[11476], 60.00th=[11476], 00:14:35.747 | 70.00th=[11476], 80.00th=[11476], 90.00th=[11476], 95.00th=[11476], 00:14:35.747 | 99.00th=[11476], 99.50th=[11476], 99.90th=[11476], 99.95th=[11476], 00:14:35.747 | 99.99th=[11476] 00:14:35.747 lat (msec) : >=2000=100.00% 00:14:35.747 cpu : usr=0.00%, sys=0.17%, ctx=35, majf=0, minf=9217 00:14:35.747 IO depths : 1=2.8%, 2=5.6%, 4=11.1%, 8=22.2%, 16=44.4%, 32=13.9%, >=64=0.0% 00:14:35.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.747 complete : 0=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:14:35.747 issued rwts: total=36,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.747 latency : target=0, window=0, percentile=100.00%, depth=1024 00:14:35.747 job1: (groupid=0, jobs=1): err= 0: pid=73613: Mon Jul 22 17:18:54 2024 00:14:35.747 read: IOPS=0, BW=359KiB/s (368kB/s)(4096KiB/11396msec) 00:14:35.747 slat (usec): min=695, max=3287.6k, avg=822466.08, stdev=1643433.70 00:14:35.747 clat (msec): min=8105, max=11394, avg=10571.65, stdev=1644.17 00:14:35.747 lat (usec): min=11393k, max=11395k, avg=11394117.62, stdev=974.51 00:14:35.747 clat percentiles (msec): 00:14:35.747 | 1.00th=[ 8087], 5.00th=[ 8087], 10.00th=[ 8087], 20.00th=[ 8087], 00:14:35.747 | 30.00th=[11342], 40.00th=[11342], 50.00th=[11342], 60.00th=[11342], 00:14:35.747 | 70.00th=[11342], 80.00th=[11342], 90.00th=[11342], 95.00th=[11342], 00:14:35.747 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:14:35.747 | 99.99th=[11342] 00:14:35.747 lat (msec) : >=2000=100.00% 00:14:35.747 cpu : usr=0.00%, sys=0.03%, ctx=10, majf=0, minf=1025 00:14:35.747 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:35.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.747 issued rwts: total=4,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.747 latency : target=0, window=0, percentile=100.00%, depth=1024 00:14:35.747 job1: (groupid=0, jobs=1): err= 0: pid=73614: Mon Jul 22 17:18:54 2024 00:14:35.747 read: IOPS=0, BW=717KiB/s (734kB/s)(8192KiB/11426msec) 00:14:35.747 slat (usec): min=684, max=3287.8k, avg=411865.31, stdev=1162044.01 00:14:35.747 clat (msec): min=8130, max=11422, avg=11009.03, stdev=1163.29 00:14:35.747 lat (msec): min=11417, max=11424, avg=11420.89, stdev= 2.38 00:14:35.747 clat percentiles (msec): 00:14:35.747 | 1.00th=[ 8154], 5.00th=[ 8154], 10.00th=[ 8154], 20.00th=[11476], 
00:14:35.747 | 30.00th=[11476], 40.00th=[11476], 50.00th=[11476], 60.00th=[11476], 00:14:35.747 | 70.00th=[11476], 80.00th=[11476], 90.00th=[11476], 95.00th=[11476], 00:14:35.747 | 99.00th=[11476], 99.50th=[11476], 99.90th=[11476], 99.95th=[11476], 00:14:35.747 | 99.99th=[11476] 00:14:35.747 lat (msec) : >=2000=100.00% 00:14:35.747 cpu : usr=0.00%, sys=0.05%, ctx=12, majf=0, minf=2049 00:14:35.747 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:35.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.748 complete : 0=0.0%, 4=0.0%, 8=100.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.748 issued rwts: total=8,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.748 latency : target=0, window=0, percentile=100.00%, depth=1024 00:14:35.748 job1: (groupid=0, jobs=1): err= 0: pid=73615: Mon Jul 22 17:18:54 2024 00:14:35.748 read: IOPS=2, BW=2412KiB/s (2470kB/s)(27.0MiB/11461msec) 00:14:35.748 slat (usec): min=606, max=3287.4k, avg=123160.52, stdev=632372.34 00:14:35.748 clat (msec): min=8134, max=11459, avg=11314.69, stdev=635.62 00:14:35.748 lat (msec): min=11422, max=11460, avg=11437.85, stdev=12.75 00:14:35.748 clat percentiles (msec): 00:14:35.748 | 1.00th=[ 8154], 5.00th=[11476], 10.00th=[11476], 20.00th=[11476], 00:14:35.748 | 30.00th=[11476], 40.00th=[11476], 50.00th=[11476], 60.00th=[11476], 00:14:35.748 | 70.00th=[11476], 80.00th=[11476], 90.00th=[11476], 95.00th=[11476], 00:14:35.748 | 99.00th=[11476], 99.50th=[11476], 99.90th=[11476], 99.95th=[11476], 00:14:35.748 | 99.99th=[11476] 00:14:35.748 lat (msec) : >=2000=100.00% 00:14:35.748 cpu : usr=0.00%, sys=0.16%, ctx=57, majf=0, minf=6913 00:14:35.748 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:14:35.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.748 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:14:35.748 issued rwts: total=27,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:14:35.748 latency : target=0, window=0, percentile=100.00%, depth=1024 00:14:35.748 job1: (groupid=0, jobs=1): err= 0: pid=73616: Mon Jul 22 17:18:54 2024 00:14:35.748 read: IOPS=2, BW=2414KiB/s (2472kB/s)(27.0MiB/11453msec) 00:14:35.748 slat (usec): min=500, max=3287.2k, avg=122786.57, stdev=632424.31 00:14:35.748 clat (msec): min=8136, max=11451, avg=11314.96, stdev=635.21 00:14:35.748 lat (msec): min=11424, max=11452, avg=11437.75, stdev=10.17 00:14:35.748 clat percentiles (msec): 00:14:35.748 | 1.00th=[ 8154], 5.00th=[11476], 10.00th=[11476], 20.00th=[11476], 00:14:35.748 | 30.00th=[11476], 40.00th=[11476], 50.00th=[11476], 60.00th=[11476], 00:14:35.748 | 70.00th=[11476], 80.00th=[11476], 90.00th=[11476], 95.00th=[11476], 00:14:35.748 | 99.00th=[11476], 99.50th=[11476], 99.90th=[11476], 99.95th=[11476], 00:14:35.748 | 99.99th=[11476] 00:14:35.748 lat (msec) : >=2000=100.00% 00:14:35.748 cpu : usr=0.00%, sys=0.13%, ctx=35, majf=0, minf=6913 00:14:35.748 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:14:35.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.748 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:14:35.748 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.748 latency : target=0, window=0, percentile=100.00%, depth=1024 00:14:35.748 00:14:35.748 Run status group 0 (all jobs): 00:14:35.748 READ: bw=12.6MiB/s (13.2MB/s), 359KiB/s-3221KiB/s (368kB/s-3298kB/s), io=144MiB (151MB), run=11396-11461msec 00:14:35.748 00:14:35.748 Disk stats (read/write): 00:14:35.748 sda: ios=56/0, merge=0/0, ticks=279843/0, in_queue=279843, util=99.19% 00:14:35.748 sdb: ios=31/0, merge=0/0, ticks=169553/0, in_queue=169553, util=99.15% 00:14:35.748 17:18:54 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@104 -- # '[' 1 -eq 1 ']' 00:14:35.748 17:18:54 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 
-d 1 -t write -r 300 -v 00:14:35.748 [global] 00:14:35.748 thread=1 00:14:35.748 invalidate=1 00:14:35.748 rw=write 00:14:35.748 time_based=1 00:14:35.748 runtime=300 00:14:35.748 ioengine=libaio 00:14:35.748 direct=1 00:14:35.748 bs=4096 00:14:35.748 iodepth=1 00:14:35.748 norandommap=0 00:14:35.748 numjobs=1 00:14:35.748 00:14:35.748 verify_dump=1 00:14:35.748 verify_backlog=512 00:14:35.748 verify_state_save=0 00:14:35.748 do_verify=1 00:14:35.748 verify=crc32c-intel 00:14:35.748 [job0] 00:14:35.748 filename=/dev/sda 00:14:35.748 [job1] 00:14:35.748 filename=/dev/sdb 00:14:35.748 queue_depth set to 113 (sda) 00:14:35.748 queue_depth set to 113 (sdb) 00:14:35.748 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:35.748 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:35.748 fio-3.35 00:14:35.748 Starting 2 threads 00:14:35.748 [2024-07-22 17:18:54.257388] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:35.748 [2024-07-22 17:18:54.261731] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:47.944 [2024-07-22 17:19:05.859224] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:00.134 [2024-07-22 17:19:17.499955] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:10.103 [2024-07-22 17:19:28.928191] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:22.301 [2024-07-22 17:19:39.995394] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:32.268 [2024-07-22 17:19:51.170683] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:44.487 [2024-07-22 17:20:02.332640] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:56.715 [2024-07-22 17:20:13.440347] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:06.696 [2024-07-22 17:20:24.329226] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:06.696 [2024-07-22 17:20:25.332521] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:18.888 [2024-07-22 17:20:35.833703] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:28.851 [2024-07-22 17:20:47.512162] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:41.091 [2024-07-22 17:20:59.679159] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:53.306 [2024-07-22 17:21:11.885593] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:05.523 [2024-07-22 17:21:24.019754] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:17.728 [2024-07-22 17:21:35.091483] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:27.697 [2024-07-22 17:21:46.234126] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:39.958 [2024-07-22 17:21:57.748666] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:41.364 [2024-07-22 17:22:00.023838] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:51.373 [2024-07-22 17:22:09.140655] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:03.577 [2024-07-22 17:22:21.202737] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:15.777 [2024-07-22 17:22:33.031646] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:27.986 [2024-07-22 17:22:45.144702] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:40.184 [2024-07-22 17:22:57.104362] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:18:50.153 [2024-07-22 17:23:08.228615] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:02.442 [2024-07-22 17:23:19.712613] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:12.412 [2024-07-22 17:23:31.160793] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:17.678 [2024-07-22 17:23:36.254439] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:24.239 [2024-07-22 17:23:42.561579] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:36.437 [2024-07-22 17:23:53.582646] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:36.437 [2024-07-22 17:23:54.377113] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:36.437 [2024-07-22 17:23:54.380628] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:36.437 00:19:36.437 job0: (groupid=0, jobs=1): err= 0: pid=73771: Mon Jul 22 17:23:54 2024 00:19:36.437 read: IOPS=2847, BW=11.1MiB/s (11.7MB/s)(3336MiB/299998msec) 00:19:36.437 slat (usec): min=2, max=434, avg= 7.00, stdev= 4.47 00:19:36.437 clat (nsec): min=1233, max=2830.8k, avg=165713.52, stdev=24075.47 00:19:36.437 lat (usec): min=105, max=2834, avg=172.71, stdev=24.27 00:19:36.437 clat percentiles (usec): 00:19:36.437 | 1.00th=[ 116], 5.00th=[ 137], 10.00th=[ 143], 20.00th=[ 149], 00:19:36.437 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:19:36.437 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 194], 95.00th=[ 206], 00:19:36.437 | 99.00th=[ 233], 99.50th=[ 243], 99.90th=[ 281], 99.95th=[ 314], 00:19:36.437 | 99.99th=[ 523] 00:19:36.437 write: IOPS=2848, BW=11.1MiB/s (11.7MB/s)(3338MiB/299998msec); 0 zone resets 00:19:36.437 slat (usec): min=4, max=1964, avg= 9.62, stdev= 7.96 00:19:36.437 clat (nsec): min=1196, max=3524.3k, avg=165536.94, stdev=39552.64 
00:19:36.437 lat (usec): min=103, max=3557, avg=175.16, stdev=39.78 00:19:36.437 clat percentiles (usec): 00:19:36.437 | 1.00th=[ 102], 5.00th=[ 113], 10.00th=[ 124], 20.00th=[ 139], 00:19:36.437 | 30.00th=[ 145], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 169], 00:19:36.437 | 70.00th=[ 184], 80.00th=[ 196], 90.00th=[ 215], 95.00th=[ 229], 00:19:36.437 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 314], 99.95th=[ 367], 00:19:36.437 | 99.99th=[ 766] 00:19:36.437 bw ( KiB/s): min= 8632, max=12561, per=50.45%, avg=11400.50, stdev=902.65, samples=599 00:19:36.437 iops : min= 2158, max= 3140, avg=2850.09, stdev=225.67, samples=599 00:19:36.437 lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01% 00:19:36.437 lat (usec) : 100=0.45%, 250=98.66%, 500=0.86%, 750=0.01%, 1000=0.01% 00:19:36.437 lat (msec) : 2=0.01%, 4=0.01% 00:19:36.437 cpu : usr=2.70%, sys=4.67%, ctx=1759973, majf=0, minf=1 00:19:36.437 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:36.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.437 issued rwts: total=854123,854528,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.437 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:36.437 job1: (groupid=0, jobs=1): err= 0: pid=73774: Mon Jul 22 17:23:54 2024 00:19:36.437 read: IOPS=2800, BW=10.9MiB/s (11.5MB/s)(3282MiB/300000msec) 00:19:36.437 slat (usec): min=2, max=2751, avg= 6.32, stdev= 5.85 00:19:36.437 clat (usec): min=2, max=5613, avg=163.01, stdev=31.84 00:19:36.437 lat (usec): min=82, max=5626, avg=169.33, stdev=32.76 00:19:36.437 clat percentiles (usec): 00:19:36.437 | 1.00th=[ 129], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:19:36.437 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 163], 00:19:36.437 | 70.00th=[ 167], 80.00th=[ 178], 90.00th=[ 190], 95.00th=[ 206], 00:19:36.437 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 
334], 99.95th=[ 392], 00:19:36.437 | 99.99th=[ 799] 00:19:36.438 write: IOPS=2801, BW=10.9MiB/s (11.5MB/s)(3283MiB/300000msec); 0 zone resets 00:19:36.438 slat (usec): min=3, max=1537, avg= 9.33, stdev= 6.77 00:19:36.438 clat (nsec): min=1333, max=3543.4k, avg=175136.14, stdev=46255.44 00:19:36.438 lat (usec): min=95, max=3550, avg=184.46, stdev=46.50 00:19:36.438 clat percentiles (usec): 00:19:36.438 | 1.00th=[ 95], 5.00th=[ 105], 10.00th=[ 119], 20.00th=[ 149], 00:19:36.438 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 167], 60.00th=[ 176], 00:19:36.438 | 70.00th=[ 188], 80.00th=[ 206], 90.00th=[ 241], 95.00th=[ 262], 00:19:36.438 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 359], 99.95th=[ 400], 00:19:36.438 | 99.99th=[ 725] 00:19:36.438 bw ( KiB/s): min= 8192, max=13026, per=49.60%, avg=11209.71, stdev=1024.30, samples=599 00:19:36.438 iops : min= 2048, max= 3256, avg=2802.38, stdev=256.09, samples=599 00:19:36.438 lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01% 00:19:36.438 lat (usec) : 100=1.36%, 250=93.89%, 500=4.70%, 750=0.02%, 1000=0.01% 00:19:36.438 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:19:36.438 cpu : usr=2.65%, sys=4.57%, ctx=1734177, majf=0, minf=2 00:19:36.438 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:36.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.438 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.438 issued rwts: total=840192,840347,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.438 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:36.438 00:19:36.438 Run status group 0 (all jobs): 00:19:36.438 READ: bw=22.1MiB/s (23.1MB/s), 10.9MiB/s-11.1MiB/s (11.5MB/s-11.7MB/s), io=6618MiB (6940MB), run=299998-300000msec 00:19:36.438 WRITE: bw=22.1MiB/s (23.1MB/s), 10.9MiB/s-11.1MiB/s (11.5MB/s-11.7MB/s), io=6621MiB (6942MB), run=299998-300000msec 00:19:36.438 00:19:36.438 Disk stats (read/write): 00:19:36.438 sda: 
ios=855232/854016, merge=0/0, ticks=136253/139899, in_queue=276152, util=100.00% 00:19:36.438 sdb: ios=839876/840138, merge=0/0, ticks=129075/145060, in_queue=274135, util=100.00% 00:19:36.438 17:23:54 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@116 -- # fio_pid=77039 00:19:36.438 17:23:54 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1048576 -d 128 -t rw -r 10 00:19:36.438 17:23:54 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@118 -- # sleep 3 00:19:36.438 [global] 00:19:36.438 thread=1 00:19:36.438 invalidate=1 00:19:36.438 rw=rw 00:19:36.438 time_based=1 00:19:36.438 runtime=10 00:19:36.438 ioengine=libaio 00:19:36.438 direct=1 00:19:36.438 bs=1048576 00:19:36.438 iodepth=128 00:19:36.438 norandommap=1 00:19:36.438 numjobs=1 00:19:36.438 00:19:36.438 [job0] 00:19:36.438 filename=/dev/sda 00:19:36.438 [job1] 00:19:36.438 filename=/dev/sdb 00:19:36.438 queue_depth set to 113 (sda) 00:19:36.438 queue_depth set to 113 (sdb) 00:19:36.438 job0: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:19:36.438 job1: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:19:36.438 fio-3.35 00:19:36.438 Starting 2 threads 00:19:36.438 [2024-07-22 17:23:54.568282] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:36.438 [2024-07-22 17:23:54.572337] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:38.971 17:23:57 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:38.971 [2024-07-22 17:23:57.686380] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (raid0) received event(SPDK_BDEV_EVENT_REMOVE) 00:19:38.971 [2024-07-22 17:23:57.686860] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b78 00:19:38.971 [2024-07-22 17:23:57.687017] 
iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b78 00:19:38.971 [2024-07-22 17:23:57.687123] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b78 00:19:38.971 [2024-07-22 17:23:57.687225] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b78 00:19:38.971 [2024-07-22 17:23:57.687314] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b78 00:19:38.971 [2024-07-22 17:23:57.690459] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b78 00:19:38.971 [2024-07-22 17:23:57.707103] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b78 00:19:38.971 [2024-07-22 17:23:57.708268] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b78 00:19:38.971 17:23:57 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@124 -- # for malloc_bdev in $malloc_bdevs 00:19:38.971 17:23:57 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:39.229 17:23:58 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@124 -- # for malloc_bdev in $malloc_bdevs 00:19:39.229 17:23:58 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:39.229 fio: io_u error on file /dev/sda: Input/output error: write offset=99614720, buflen=1048576 00:19:39.229 fio: io_u error on file /dev/sda: Input/output error: write offset=100663296, buflen=1048576 00:19:39.798 17:23:58 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:39.798 fio: pid=77068, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=101711872, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=71303168, buflen=1048576 00:19:39.798 fio: io_u error on file 
/dev/sda: Input/output error: read offset=72351744, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=102760448, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=103809024, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=73400320, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=104857600, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=105906176, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=106954752, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=108003328, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=109051904, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=74448896, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=75497472, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=76546048, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=77594624, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=78643200, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=110100480, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=79691776, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=80740352, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=111149056, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=112197632, buflen=1048576 00:19:39.798 fio: io_u error on file 
/dev/sda: Input/output error: write offset=113246208, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=114294784, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=81788928, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=82837504, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=83886080, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=84934656, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=115343360, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=85983232, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=116391936, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=87031808, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=88080384, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=117440512, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=118489088, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=119537664, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=89128960, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=90177536, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=91226112, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=120586240, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=92274688, buflen=1048576 00:19:39.798 fio: io_u error on file 
/dev/sda: Input/output error: read offset=93323264, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=94371840, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=121634816, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=95420416, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: read offset=96468992, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=122683392, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=123731968, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=124780544, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=125829120, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=126877696, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=127926272, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=128974848, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=130023424, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=131072000, buflen=1048576 00:19:39.798 fio: io_u error on file /dev/sda: Input/output error: write offset=132120576, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=97517568, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=98566144, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=133169152, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=99614720, buflen=1048576 00:19:39.799 fio: io_u error on file 
/dev/sda: Input/output error: write offset=0, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=1048576, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=100663296, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=101711872, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=102760448, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=2097152, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=103809024, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=104857600, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=3145728, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=105906176, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=4194304, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=5242880, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=106954752, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=6291456, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=108003328, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=109051904, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=7340032, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=110100480, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=111149056, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: 
Input/output error: write offset=8388608, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=112197632, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=9437184, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=113246208, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=10485760, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=114294784, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=115343360, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=116391936, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=11534336, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=12582912, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=117440512, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=118489088, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=13631488, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=14680064, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=15728640, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=16777216, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=17825792, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=119537664, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=120586240, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: 
Input/output error: write offset=18874368, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=19922944, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=20971520, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=22020096, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=23068672, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=121634816, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=24117248, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=122683392, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=25165824, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=123731968, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=26214400, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=124780544, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=125829120, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=126877696, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=27262976, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=28311552, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=29360128, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=127926272, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=128974848, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: 
Input/output error: read offset=130023424, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=131072000, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=132120576, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: write offset=30408704, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=133169152, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=0, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=1048576, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=2097152, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=3145728, buflen=1048576 00:19:39.799 fio: io_u error on file /dev/sda: Input/output error: read offset=4194304, buflen=1048576 00:19:40.058 [2024-07-22 17:23:58.802712] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Malloc2) received event(SPDK_BDEV_EVENT_REMOVE) 00:19:40.058 [2024-07-22 17:23:58.804825] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c45 00:19:40.058 [2024-07-22 17:23:58.806586] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c45 00:19:40.058 [2024-07-22 17:23:58.807797] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c45 00:19:40.058 [2024-07-22 17:23:58.809164] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c45 00:19:42.590 [2024-07-22 17:24:01.373029] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c45 00:19:42.590 [2024-07-22 17:24:01.374501] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c45 00:19:42.590 [2024-07-22 17:24:01.375687] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c45 00:19:42.590 
[2024-07-22 17:24:01.377091] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c45 00:19:42.590 [2024-07-22 17:24:01.378170] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c45 00:19:42.590 [2024-07-22 17:24:01.379537] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c45 00:19:42.590 [2024-07-22 17:24:01.380625] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c45 00:19:42.590 [2024-07-22 17:24:01.382047] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c45 00:19:42.590 [2024-07-22 17:24:01.383136] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c45 00:19:42.590 [2024-07-22 17:24:01.384432] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c45 00:19:42.590 [2024-07-22 17:24:01.385432] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c46 00:19:42.590 [2024-07-22 17:24:01.385559] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c46 00:19:42.590 [2024-07-22 17:24:01.385647] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c46 00:19:42.590 [2024-07-22 17:24:01.385731] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c46 00:19:42.590 [2024-07-22 17:24:01.385815] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c46 00:19:42.590 [2024-07-22 17:24:01.385905] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c46 00:19:42.590 [2024-07-22 17:24:01.386019] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c46 00:19:42.590 [2024-07-22 17:24:01.394171] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c46 00:19:42.590 [2024-07-22 17:24:01.394276] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c46 00:19:42.590 17:24:01 
iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@131 -- # fio_status=0 00:19:42.590 17:24:01 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@132 -- # wait 77039 00:19:42.590 [2024-07-22 17:24:01.400344] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c46 00:19:42.590 [2024-07-22 17:24:01.402077] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c46 00:19:42.590 [2024-07-22 17:24:01.403692] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c46 00:19:42.590 [2024-07-22 17:24:01.404929] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c46 00:19:42.590 [2024-07-22 17:24:01.406354] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c46 00:19:42.590 [2024-07-22 17:24:01.407782] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c46 00:19:42.590 [2024-07-22 17:24:01.409049] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c46 00:19:42.590 [2024-07-22 17:24:01.410398] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c47 00:19:42.590 [2024-07-22 17:24:01.411887] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c47 00:19:42.590 [2024-07-22 17:24:01.413085] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c47 00:19:42.590 [2024-07-22 17:24:01.414135] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c47 00:19:42.590 [2024-07-22 17:24:01.415616] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c47 00:19:42.590 [2024-07-22 17:24:01.416988] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c47 00:19:42.590 [2024-07-22 17:24:01.418374] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c47 00:19:42.590 [2024-07-22 17:24:01.419772] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c47 00:19:42.590 
[2024-07-22 17:24:01.421141] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c47 00:19:42.590 [2024-07-22 17:24:01.422325] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c47 00:19:42.590 [2024-07-22 17:24:01.423575] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c47 00:19:42.590 [2024-07-22 17:24:01.424904] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c47 00:19:42.590 [2024-07-22 17:24:01.426569] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c47 00:19:42.591 [2024-07-22 17:24:01.427884] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c47 00:19:42.591 [2024-07-22 17:24:01.429322] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c47 00:19:42.591 [2024-07-22 17:24:01.430399] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c47 00:19:42.591 [2024-07-22 17:24:01.431461] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c48 00:19:42.591 [2024-07-22 17:24:01.431580] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c48 00:19:42.591 [2024-07-22 17:24:01.434194] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c48 00:19:42.591 [2024-07-22 17:24:01.435625] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c48 00:19:42.591 [2024-07-22 17:24:01.436891] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c48 00:19:42.591 [2024-07-22 17:24:01.438307] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c48 00:19:42.591 [2024-07-22 17:24:01.439479] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c48 00:19:42.591 [2024-07-22 17:24:01.440845] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c48 00:19:42.591 [2024-07-22 17:24:01.442204] 
iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c48 00:19:42.591 [2024-07-22 17:24:01.443565] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c48 00:19:42.591 [2024-07-22 17:24:01.444993] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c48 00:19:42.591 [2024-07-22 17:24:01.446090] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c48 00:19:42.591 [2024-07-22 17:24:01.447346] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c48 00:19:42.591 [2024-07-22 17:24:01.448419] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c48 00:19:42.591 [2024-07-22 17:24:01.449739] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c48 00:19:42.591 [2024-07-22 17:24:01.450909] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c48 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=655360000, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=656408576, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=657457152, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=658505728, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=659554304, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=660602880, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=661651456, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=662700032, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=663748608, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=664797184, 
buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=665845760, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=666894336, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=667942912, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=668991488, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=670040064, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=671088640, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=651165696, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=652214272, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=653262848, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=654311424, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=672137216, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=617611264, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=673185792, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=674234368, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=675282944, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=676331520, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=677380096, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=618659840, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read 
offset=619708416, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=678428672, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=679477248, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=620756992, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=680525824, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=621805568, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=681574400, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=682622976, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=683671552, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=622854144, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=684720128, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=623902720, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=685768704, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=686817280, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=687865856, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=688914432, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=689963008, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=624951296, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=625999872, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output 
error: write offset=691011584, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=692060160, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=693108736, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=627048448, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=694157312, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=628097024, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=695205888, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=696254464, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=697303040, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=629145600, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=630194176, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=631242752, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=632291328, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=698351616, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=699400192, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=700448768, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=633339904, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=634388480, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=635437056, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: 
Input/output error: write offset=701497344, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=702545920, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=636485632, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=703594496, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=637534208, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=704643072, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=638582784, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=639631360, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=705691648, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=640679936, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=706740224, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=707788800, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=641728512, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=708837376, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=642777088, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=643825664, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=709885952, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=710934528, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=711983104, buflen=1048576 00:19:42.591 fio: io_u error on file 
/dev/sdb: Input/output error: read offset=644874240, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=645922816, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=646971392, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=648019968, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=649068544, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=713031680, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=650117120, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=714080256, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=651165696, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=715128832, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=716177408, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=717225984, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: read offset=652214272, buflen=1048576 00:19:42.591 fio: io_u error on file /dev/sdb: Input/output error: write offset=718274560, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: read offset=653262848, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: read offset=654311424, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: read offset=655360000, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: write offset=719323136, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: write offset=720371712, buflen=1048576 00:19:42.592 fio: io_u error on 
file /dev/sdb: Input/output error: read offset=656408576, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: read offset=657457152, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: read offset=658505728, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: write offset=721420288, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: read offset=659554304, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: read offset=660602880, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: write offset=722468864, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: write offset=723517440, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: read offset=661651456, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: read offset=662700032, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: read offset=663748608, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: write offset=724566016, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: read offset=664797184, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: read offset=665845760, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: read offset=666894336, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: write offset=725614592, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: write offset=726663168, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: read offset=667942912, buflen=1048576 00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: read offset=668991488, buflen=1048576 00:19:42.592 fio: io_u error 
on file /dev/sdb: Input/output error: write offset=727711744, buflen=1048576
00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: write offset=728760320, buflen=1048576
00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: read offset=670040064, buflen=1048576
00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: write offset=729808896, buflen=1048576
00:19:42.592 fio: io_u error on file /dev/sdb: Input/output error: read offset=671088640, buflen=1048576
00:19:42.592 fio: pid=77071, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error
00:19:42.592
00:19:42.592 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=77068: Mon Jul 22 17:24:01 2024
00:19:42.592 read: IOPS=99, BW=83.2MiB/s (87.3MB/s)(324MiB/3892msec)
00:19:42.592 slat (usec): min=26, max=80148, avg=3725.12, stdev=8174.83
00:19:42.592 clat (msec): min=389, max=708, avg=500.87, stdev=55.49
00:19:42.592 lat (msec): min=389, max=709, avg=504.26, stdev=56.15
00:19:42.592 clat percentiles (msec):
00:19:42.592 | 1.00th=[ 405], 5.00th=[ 418], 10.00th=[ 426], 20.00th=[ 456],
00:19:42.592 | 30.00th=[ 468], 40.00th=[ 477], 50.00th=[ 498], 60.00th=[ 510],
00:19:42.592 | 70.00th=[ 535], 80.00th=[ 558], 90.00th=[ 575], 95.00th=[ 584],
00:19:42.592 | 99.00th=[ 609], 99.50th=[ 693], 99.90th=[ 709], 99.95th=[ 709],
00:19:42.592 | 99.99th=[ 709]
00:19:42.592 bw ( KiB/s): min=30720, max=143360, per=67.74%, avg=94752.14, stdev=47139.60, samples=7
00:19:42.592 iops : min= 30, max= 140, avg=92.43, stdev=45.91, samples=7
00:19:42.592 write: IOPS=106, BW=90.2MiB/s (94.6MB/s)(351MiB/3892msec); 0 zone resets
00:19:42.592 slat (usec): min=41, max=218042, avg=4594.17, stdev=12548.46
00:19:42.592 clat (msec): min=446, max=811, avg=562.02, stdev=63.27
00:19:42.592 lat (msec): min=446, max=811, avg=566.43, stdev=63.60
00:19:42.592 clat percentiles (msec):
00:19:42.592 | 1.00th=[ 456], 5.00th=[ 472], 10.00th=[ 489], 20.00th=[ 510],
00:19:42.592 | 30.00th=[ 527], 40.00th=[ 542], 50.00th=[ 558], 60.00th=[ 575],
00:19:42.592 | 70.00th=[ 592], 80.00th=[ 600], 90.00th=[ 634], 95.00th=[ 667],
00:19:42.592 | 99.00th=[ 776], 99.50th=[ 785], 99.90th=[ 810], 99.95th=[ 810],
00:19:42.592 | 99.99th=[ 810]
00:19:42.592 bw ( KiB/s): min=22528, max=145408, per=68.94%, avg=102659.86, stdev=47440.85, samples=7
00:19:42.592 iops : min= 22, max= 142, avg=100.14, stdev=46.30, samples=7
00:19:42.592 lat (msec) : 500=27.77%, 750=55.42%, 1000=0.87%
00:19:42.592 cpu : usr=0.85%, sys=1.49%, ctx=331, majf=0, minf=1
00:19:42.592 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2%
00:19:42.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:42.592 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:19:42.592 issued rwts: total=389,414,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:42.592 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:42.592 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=77071: Mon Jul 22 17:24:01 2024
00:19:42.592 read: IOPS=95, BW=88.1MiB/s (92.4MB/s)(589MiB/6684msec)
00:19:42.592 slat (usec): min=31, max=2604.1k, avg=7129.22, stdev=103382.23
00:19:42.592 clat (msec): min=152, max=2787, avg=503.65, stdev=503.67
00:19:42.592 lat (msec): min=152, max=2787, avg=506.90, stdev=503.26
00:19:42.592 clat percentiles (msec):
00:19:42.592 | 1.00th=[ 169], 5.00th=[ 199], 10.00th=[ 243], 20.00th=[ 359],
00:19:42.592 | 30.00th=[ 380], 40.00th=[ 388], 50.00th=[ 405], 60.00th=[ 426],
00:19:42.592 | 70.00th=[ 456], 80.00th=[ 485], 90.00th=[ 518], 95.00th=[ 542],
00:19:42.592 | 99.00th=[ 2769], 99.50th=[ 2802], 99.90th=[ 2802], 99.95th=[ 2802],
00:19:42.592 | 99.99th=[ 2802]
00:19:42.592 bw ( KiB/s): min=59392, max=194560, per=91.41%, avg=127852.11, stdev=42717.36, samples=9
00:19:42.592 iops : min= 58, max= 190, avg=124.78, stdev=41.66, samples=9
00:19:42.592 write: IOPS=104, BW=92.9MiB/s (97.4MB/s)(621MiB/6684msec); 0 zone resets
00:19:42.592 slat (usec): min=54, max=32846, avg=3020.10, stdev=5786.74
00:19:42.592 clat (msec): min=202, max=2869, avg=562.10, stdev=489.62
00:19:42.592 lat (msec): min=203, max=2869, avg=565.36, stdev=489.72
00:19:42.592 clat percentiles (msec):
00:19:42.592 | 1.00th=[ 218], 5.00th=[ 234], 10.00th=[ 338], 20.00th=[ 405],
00:19:42.592 | 30.00th=[ 426], 40.00th=[ 443], 50.00th=[ 472], 60.00th=[ 502],
00:19:42.592 | 70.00th=[ 531], 80.00th=[ 550], 90.00th=[ 584], 95.00th=[ 609],
00:19:42.592 | 99.00th=[ 2836], 99.50th=[ 2869], 99.90th=[ 2869], 99.95th=[ 2869],
00:19:42.592 | 99.99th=[ 2869]
00:19:42.592 bw ( KiB/s): min= 8192, max=210944, per=90.75%, avg=135138.00, stdev=60416.07, samples=9
00:19:42.592 iops : min= 8, max= 206, avg=131.89, stdev=59.00, samples=9
00:19:42.592 lat (msec) : 250=7.55%, 500=56.20%, 750=22.65%, >=2000=4.04%
00:19:42.592 cpu : usr=0.94%, sys=1.23%, ctx=365, majf=0, minf=1
00:19:42.592 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3%
00:19:42.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:42.592 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:19:42.592 issued rwts: total=641,697,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:42.592 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:42.592
00:19:42.592 Run status group 0 (all jobs):
00:19:42.592 READ: bw=137MiB/s (143MB/s), 83.2MiB/s-88.1MiB/s (87.3MB/s-92.4MB/s), io=913MiB (957MB), run=3892-6684msec
00:19:42.592 WRITE: bw=145MiB/s (152MB/s), 90.2MiB/s-92.9MiB/s (94.6MB/s-97.4MB/s), io=972MiB (1019MB), run=3892-6684msec
00:19:42.592
00:19:42.592 Disk stats (read/write):
00:19:42.592 sda: ios=426/410, merge=0/0, ticks=81437/109189, in_queue=190626, util=86.27%
00:19:42.592 sdb: ios=637/638, merge=0/0, ticks=87215/130052, in_queue=217267, util=93.23%
00:19:42.592 iscsi hotplug test: fio failed as expected
00:19:42.592 Cleaning up iSCSI
connection
00:19:42.592 17:24:01 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@132 -- # fio_status=2
00:19:42.592 17:24:01 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@134 -- # '[' 2 -eq 0 ']'
00:19:42.592 17:24:01 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@138 -- # echo 'iscsi hotplug test: fio failed as expected'
00:19:42.592 17:24:01 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@141 -- # iscsicleanup
00:19:42.592 17:24:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection'
00:19:42.592 17:24:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout
00:19:42.850 Logging out of session [sid: 19, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260]
00:19:42.850 Logout of [sid: 19, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful.
00:19:42.850 17:24:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete
00:19:42.850 17:24:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@983 -- # rm -rf
00:19:42.850 17:24:01 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_delete_target_node iqn.2016-06.io.spdk:Target3
00:19:43.108 17:24:01 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@144 -- # delete_tmp_files
00:19:43.108 17:24:01 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@14 -- # rm -f /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/iscsi2.json
00:19:43.109 17:24:01 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@15 -- # rm -f ./local-job0-0-verify.state
00:19:43.109 17:24:01 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@16 -- # rm -f ./local-job1-1-verify.state
00:19:43.109 17:24:01 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:19:43.109 17:24:01 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@148 -- # killprocess 73249
00:19:43.109 17:24:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@948 -- # '[' -z 73249 ']'
00:19:43.109 17:24:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@952 -- # kill -0 73249
00:19:43.109 17:24:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@953 -- # uname
00:19:43.109 17:24:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:43.109 17:24:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73249
00:19:43.109 killing process with pid 73249 17:24:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:19:43.109 17:24:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:19:43.109 17:24:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73249'
00:19:43.109 17:24:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@967 -- # kill 73249
00:19:43.109 17:24:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@972 -- # wait 73249
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@150 -- # iscsitestfini
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']'
00:19:45.664
00:19:45.664 real 5m32.435s
00:19:45.664 user 3m44.772s
00:19:45.664 sys 1m49.650s
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:45.664 ************************************
00:19:45.664 END TEST iscsi_tgt_fio
00:19:45.664 ************************************
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x
00:19:45.664 17:24:04 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0
00:19:45.664 17:24:04 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@38 -- # run_test iscsi_tgt_qos /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos/qos.sh
00:19:45.664 17:24:04 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:19:45.664 17:24:04 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:45.664 17:24:04 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x
00:19:45.664
************************************
00:19:45.664 START TEST iscsi_tgt_qos
00:19:45.664 ************************************
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos/qos.sh
00:19:45.664 * Looking for test storage...
00:19:45.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE")
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}")
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@11 -- # iscsitestinit
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']'
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@44 -- # '[' -z 10.0.0.1 ']'
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@49 -- # '[' -z 10.0.0.2 ']'
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@54 -- # MALLOC_BDEV_SIZE=64
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@55 -- # MALLOC_BLOCK_SIZE=512
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@56 -- # IOPS_RESULT=
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@57 -- # BANDWIDTH_RESULT=
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@58 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@60 -- # timing_enter start_iscsi_tgt
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@722 -- # xtrace_disable
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x
00:19:45.664 Process pid: 77271
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@63 -- # pid=77271
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@64 -- # echo 'Process pid: 77271'
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@62 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@65 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@66 -- # waitforlisten 77271
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@829 -- # '[' -z 77271 ']'
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:45.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 17:24:04 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:45.664 17:24:04 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x
00:19:45.664 [2024-07-22 17:24:04.492779] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:19:45.664 [2024-07-22 17:24:04.492976] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77271 ]
00:19:45.923 [2024-07-22 17:24:04.660444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:46.181 [2024-07-22 17:24:04.919262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@862 -- # return 0
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@67 -- # echo 'iscsi_tgt is listening. Running tests...'
00:19:47.172 iscsi_tgt is listening. Running tests...
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@69 -- # timing_exit start_iscsi_tgt
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@728 -- # xtrace_disable
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@71 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@72 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@73 -- # rpc_cmd bdev_malloc_create 64 512
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x
00:19:47.172 Malloc0
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@78 -- # rpc_cmd iscsi_create_target_node Target1 Target1_alias Malloc0:0 1:2 64 -d
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x
00:19:47.172 17:24:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@79
-- # sleep 1 00:19:48.108 17:24:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@81 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:19:48.108 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:19:48.108 17:24:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@82 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:19:48.108 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:19:48.108 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:19:48.108 17:24:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@84 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:19:48.108 17:24:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@87 -- # run_fio Malloc0 00:19:48.108 17:24:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:19:48.108 17:24:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:19:48.108 17:24:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:19:48.108 17:24:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:19:48.108 17:24:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:19:48.108 17:24:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:19:48.108 17:24:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:19:48.108 17:24:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:48.108 17:24:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.108 17:24:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:48.108 [2024-07-22 17:24:06.944508] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:48.108 17:24:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.108 17:24:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:19:48.108 "tick_rate": 2200000000, 
00:19:48.108 "ticks": 2451057487161, 00:19:48.108 "bdevs": [ 00:19:48.108 { 00:19:48.108 "name": "Malloc0", 00:19:48.108 "bytes_read": 37376, 00:19:48.108 "num_read_ops": 3, 00:19:48.108 "bytes_written": 0, 00:19:48.108 "num_write_ops": 0, 00:19:48.108 "bytes_unmapped": 0, 00:19:48.108 "num_unmap_ops": 0, 00:19:48.108 "bytes_copied": 0, 00:19:48.108 "num_copy_ops": 0, 00:19:48.108 "read_latency_ticks": 1985484, 00:19:48.108 "max_read_latency_ticks": 774122, 00:19:48.108 "min_read_latency_ticks": 569209, 00:19:48.108 "write_latency_ticks": 0, 00:19:48.108 "max_write_latency_ticks": 0, 00:19:48.108 "min_write_latency_ticks": 0, 00:19:48.108 "unmap_latency_ticks": 0, 00:19:48.108 "max_unmap_latency_ticks": 0, 00:19:48.108 "min_unmap_latency_ticks": 0, 00:19:48.108 "copy_latency_ticks": 0, 00:19:48.108 "max_copy_latency_ticks": 0, 00:19:48.108 "min_copy_latency_ticks": 0, 00:19:48.108 "io_error": {} 00:19:48.108 } 00:19:48.108 ] 00:19:48.108 }' 00:19:48.108 17:24:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:19:48.108 17:24:07 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=3 00:19:48.108 17:24:07 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:19:48.367 17:24:07 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=37376 00:19:48.367 17:24:07 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:19:48.367 [global] 00:19:48.367 thread=1 00:19:48.367 invalidate=1 00:19:48.367 rw=randread 00:19:48.367 time_based=1 00:19:48.367 runtime=5 00:19:48.367 ioengine=libaio 00:19:48.367 direct=1 00:19:48.367 bs=1024 00:19:48.367 iodepth=128 00:19:48.367 norandommap=1 00:19:48.367 numjobs=1 00:19:48.367 00:19:48.367 [job0] 00:19:48.367 filename=/dev/sda 00:19:48.367 queue_depth set to 113 (sda) 00:19:48.367 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, 
iodepth=128 00:19:48.367 fio-3.35 00:19:48.367 Starting 1 thread 00:19:53.632 00:19:53.632 job0: (groupid=0, jobs=1): err= 0: pid=77362: Mon Jul 22 17:24:12 2024 00:19:53.632 read: IOPS=33.0k, BW=32.3MiB/s (33.8MB/s)(161MiB/5004msec) 00:19:53.632 slat (nsec): min=1948, max=991099, avg=28243.75, stdev=89182.90 00:19:53.632 clat (usec): min=1108, max=6927, avg=3845.79, stdev=181.24 00:19:53.632 lat (usec): min=1114, max=6930, avg=3874.03, stdev=159.45 00:19:53.632 clat percentiles (usec): 00:19:53.632 | 1.00th=[ 3359], 5.00th=[ 3589], 10.00th=[ 3654], 20.00th=[ 3720], 00:19:53.632 | 30.00th=[ 3752], 40.00th=[ 3785], 50.00th=[ 3851], 60.00th=[ 3884], 00:19:53.632 | 70.00th=[ 3949], 80.00th=[ 3982], 90.00th=[ 4047], 95.00th=[ 4113], 00:19:53.632 | 99.00th=[ 4228], 99.50th=[ 4293], 99.90th=[ 4424], 99.95th=[ 4490], 00:19:53.632 | 99.99th=[ 6456] 00:19:53.632 bw ( KiB/s): min=32300, max=34075, per=99.83%, avg=32969.22, stdev=698.61, samples=9 00:19:53.632 iops : min=32300, max=34075, avg=32969.22, stdev=698.61, samples=9 00:19:53.633 lat (msec) : 2=0.03%, 4=81.23%, 10=18.74% 00:19:53.633 cpu : usr=7.56%, sys=14.39%, ctx=90221, majf=0, minf=32 00:19:53.633 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:53.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:53.633 issued rwts: total=165260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.633 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:53.633 00:19:53.633 Run status group 0 (all jobs): 00:19:53.633 READ: bw=32.3MiB/s (33.8MB/s), 32.3MiB/s-32.3MiB/s (33.8MB/s-33.8MB/s), io=161MiB (169MB), run=5004-5004msec 00:19:53.633 00:19:53.633 Disk stats (read/write): 00:19:53.633 sda: ios=161546/0, merge=0/0, ticks=534996/0, in_queue=534996, util=98.16% 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:53.633 
17:24:12 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:19:53.633 "tick_rate": 2200000000, 00:19:53.633 "ticks": 2463063019356, 00:19:53.633 "bdevs": [ 00:19:53.633 { 00:19:53.633 "name": "Malloc0", 00:19:53.633 "bytes_read": 170336768, 00:19:53.633 "num_read_ops": 165317, 00:19:53.633 "bytes_written": 0, 00:19:53.633 "num_write_ops": 0, 00:19:53.633 "bytes_unmapped": 0, 00:19:53.633 "num_unmap_ops": 0, 00:19:53.633 "bytes_copied": 0, 00:19:53.633 "num_copy_ops": 0, 00:19:53.633 "read_latency_ticks": 54888152581, 00:19:53.633 "max_read_latency_ticks": 812179, 00:19:53.633 "min_read_latency_ticks": 18672, 00:19:53.633 "write_latency_ticks": 0, 00:19:53.633 "max_write_latency_ticks": 0, 00:19:53.633 "min_write_latency_ticks": 0, 00:19:53.633 "unmap_latency_ticks": 0, 00:19:53.633 "max_unmap_latency_ticks": 0, 00:19:53.633 "min_unmap_latency_ticks": 0, 00:19:53.633 "copy_latency_ticks": 0, 00:19:53.633 "max_copy_latency_ticks": 0, 00:19:53.633 "min_copy_latency_ticks": 0, 00:19:53.633 "io_error": {} 00:19:53.633 } 00:19:53.633 ] 00:19:53.633 }' 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=165317 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=170336768 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=33062 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=34059878 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@90 -- # IOPS_LIMIT=16531 
00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@91 -- # BANDWIDTH_LIMIT=17029939 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@94 -- # READ_BANDWIDTH_LIMIT=8514969 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@98 -- # IOPS_LIMIT=16000 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@99 -- # BANDWIDTH_LIMIT_MB=16 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@100 -- # BANDWIDTH_LIMIT=16777216 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@101 -- # READ_BANDWIDTH_LIMIT_MB=8 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@102 -- # READ_BANDWIDTH_LIMIT=8388608 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@105 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 16000 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@106 -- # run_fio Malloc0 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.633 17:24:12 
iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:19:53.633 "tick_rate": 2200000000, 00:19:53.633 "ticks": 2463362158982, 00:19:53.633 "bdevs": [ 00:19:53.633 { 00:19:53.633 "name": "Malloc0", 00:19:53.633 "bytes_read": 170336768, 00:19:53.633 "num_read_ops": 165317, 00:19:53.633 "bytes_written": 0, 00:19:53.633 "num_write_ops": 0, 00:19:53.633 "bytes_unmapped": 0, 00:19:53.633 "num_unmap_ops": 0, 00:19:53.633 "bytes_copied": 0, 00:19:53.633 "num_copy_ops": 0, 00:19:53.633 "read_latency_ticks": 54888152581, 00:19:53.633 "max_read_latency_ticks": 812179, 00:19:53.633 "min_read_latency_ticks": 18672, 00:19:53.633 "write_latency_ticks": 0, 00:19:53.633 "max_write_latency_ticks": 0, 00:19:53.633 "min_write_latency_ticks": 0, 00:19:53.633 "unmap_latency_ticks": 0, 00:19:53.633 "max_unmap_latency_ticks": 0, 00:19:53.633 "min_unmap_latency_ticks": 0, 00:19:53.633 "copy_latency_ticks": 0, 00:19:53.633 "max_copy_latency_ticks": 0, 00:19:53.633 "min_copy_latency_ticks": 0, 00:19:53.633 "io_error": {} 00:19:53.633 } 00:19:53.633 ] 00:19:53.633 }' 00:19:53.633 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:19:53.891 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=165317 00:19:53.891 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:19:53.891 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=170336768 00:19:53.891 17:24:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:19:53.891 [global] 00:19:53.891 thread=1 00:19:53.891 invalidate=1 00:19:53.891 rw=randread 00:19:53.891 time_based=1 00:19:53.891 runtime=5 00:19:53.891 ioengine=libaio 00:19:53.891 direct=1 00:19:53.891 
bs=1024 00:19:53.891 iodepth=128 00:19:53.891 norandommap=1 00:19:53.891 numjobs=1 00:19:53.891 00:19:53.891 [job0] 00:19:53.891 filename=/dev/sda 00:19:53.891 queue_depth set to 113 (sda) 00:19:53.891 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:19:53.891 fio-3.35 00:19:53.891 Starting 1 thread 00:19:59.179 00:19:59.179 job0: (groupid=0, jobs=1): err= 0: pid=77446: Mon Jul 22 17:24:17 2024 00:19:59.179 read: IOPS=16.0k, BW=15.6MiB/s (16.4MB/s)(78.2MiB/5008msec) 00:19:59.179 slat (nsec): min=1924, max=1420.9k, avg=59566.48, stdev=203959.52 00:19:59.179 clat (usec): min=1162, max=14783, avg=7938.17, stdev=385.75 00:19:59.179 lat (usec): min=1169, max=14794, avg=7997.73, stdev=330.97 00:19:59.179 clat percentiles (usec): 00:19:59.179 | 1.00th=[ 6915], 5.00th=[ 7242], 10.00th=[ 7439], 20.00th=[ 7701], 00:19:59.179 | 30.00th=[ 7963], 40.00th=[ 8029], 50.00th=[ 8029], 60.00th=[ 8029], 00:19:59.179 | 70.00th=[ 8094], 80.00th=[ 8094], 90.00th=[ 8225], 95.00th=[ 8291], 00:19:59.179 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[ 9765], 99.95th=[12649], 00:19:59.179 | 99.99th=[14746] 00:19:59.179 bw ( KiB/s): min=15810, max=16032, per=100.00%, avg=15997.40, stdev=67.63, samples=10 00:19:59.179 iops : min=15810, max=16032, avg=15997.40, stdev=67.63, samples=10 00:19:59.179 lat (msec) : 2=0.02%, 4=0.04%, 10=99.84%, 20=0.10% 00:19:59.179 cpu : usr=5.11%, sys=10.07%, ctx=44916, majf=0, minf=32 00:19:59.179 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:59.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:59.179 issued rwts: total=80114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.179 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:59.179 00:19:59.179 Run status group 0 (all jobs): 00:19:59.179 READ: bw=15.6MiB/s (16.4MB/s), 15.6MiB/s-15.6MiB/s 
(16.4MB/s-16.4MB/s), io=78.2MiB (82.0MB), run=5008-5008msec 00:19:59.179 00:19:59.179 Disk stats (read/write): 00:19:59.179 sda: ios=78256/0, merge=0/0, ticks=538652/0, in_queue=538652, util=98.11% 00:19:59.179 17:24:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:59.179 17:24:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.179 17:24:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:59.179 17:24:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.179 17:24:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:19:59.179 "tick_rate": 2200000000, 00:19:59.179 "ticks": 2475313324302, 00:19:59.179 "bdevs": [ 00:19:59.179 { 00:19:59.179 "name": "Malloc0", 00:19:59.179 "bytes_read": 252373504, 00:19:59.179 "num_read_ops": 245431, 00:19:59.179 "bytes_written": 0, 00:19:59.179 "num_write_ops": 0, 00:19:59.179 "bytes_unmapped": 0, 00:19:59.179 "num_unmap_ops": 0, 00:19:59.179 "bytes_copied": 0, 00:19:59.179 "num_copy_ops": 0, 00:19:59.179 "read_latency_ticks": 650573933460, 00:19:59.179 "max_read_latency_ticks": 8694199, 00:19:59.179 "min_read_latency_ticks": 18672, 00:19:59.179 "write_latency_ticks": 0, 00:19:59.179 "max_write_latency_ticks": 0, 00:19:59.179 "min_write_latency_ticks": 0, 00:19:59.179 "unmap_latency_ticks": 0, 00:19:59.179 "max_unmap_latency_ticks": 0, 00:19:59.179 "min_unmap_latency_ticks": 0, 00:19:59.179 "copy_latency_ticks": 0, 00:19:59.179 "max_copy_latency_ticks": 0, 00:19:59.179 "min_copy_latency_ticks": 0, 00:19:59.179 "io_error": {} 00:19:59.179 } 00:19:59.179 ] 00:19:59.179 }' 00:19:59.179 17:24:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=245431 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:19:59.179 17:24:18 
iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=252373504 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=16022 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=16407347 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@107 -- # verify_qos_limits 16022 16000 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=16022 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=16000 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@110 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 0 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@111 -- # run_fio Malloc0 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- 
qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:59.179 17:24:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.180 17:24:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:59.180 17:24:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.180 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:19:59.180 "tick_rate": 2200000000, 00:19:59.180 "ticks": 2475621432574, 00:19:59.180 "bdevs": [ 00:19:59.180 { 00:19:59.180 "name": "Malloc0", 00:19:59.180 "bytes_read": 252373504, 00:19:59.180 "num_read_ops": 245431, 00:19:59.180 "bytes_written": 0, 00:19:59.180 "num_write_ops": 0, 00:19:59.180 "bytes_unmapped": 0, 00:19:59.180 "num_unmap_ops": 0, 00:19:59.180 "bytes_copied": 0, 00:19:59.180 "num_copy_ops": 0, 00:19:59.180 "read_latency_ticks": 650573933460, 00:19:59.180 "max_read_latency_ticks": 8694199, 00:19:59.180 "min_read_latency_ticks": 18672, 00:19:59.180 "write_latency_ticks": 0, 00:19:59.180 "max_write_latency_ticks": 0, 00:19:59.180 "min_write_latency_ticks": 0, 00:19:59.180 "unmap_latency_ticks": 0, 00:19:59.180 "max_unmap_latency_ticks": 0, 00:19:59.180 "min_unmap_latency_ticks": 0, 00:19:59.180 "copy_latency_ticks": 0, 00:19:59.180 "max_copy_latency_ticks": 0, 00:19:59.180 "min_copy_latency_ticks": 0, 00:19:59.180 "io_error": {} 00:19:59.180 } 00:19:59.180 ] 00:19:59.180 }' 00:19:59.180 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:19:59.437 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=245431 00:19:59.437 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:19:59.437 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=252373504 00:19:59.437 17:24:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:19:59.437 [global] 00:19:59.437 
thread=1 00:19:59.437 invalidate=1 00:19:59.437 rw=randread 00:19:59.437 time_based=1 00:19:59.437 runtime=5 00:19:59.437 ioengine=libaio 00:19:59.437 direct=1 00:19:59.437 bs=1024 00:19:59.437 iodepth=128 00:19:59.437 norandommap=1 00:19:59.437 numjobs=1 00:19:59.437 00:19:59.437 [job0] 00:19:59.437 filename=/dev/sda 00:19:59.437 queue_depth set to 113 (sda) 00:19:59.695 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:19:59.695 fio-3.35 00:19:59.695 Starting 1 thread 00:20:04.957 00:20:04.957 job0: (groupid=0, jobs=1): err= 0: pid=77536: Mon Jul 22 17:24:23 2024 00:20:04.957 read: IOPS=33.1k, BW=32.3MiB/s (33.8MB/s)(161MiB/5003msec) 00:20:04.957 slat (nsec): min=1841, max=986195, avg=28253.52, stdev=89252.84 00:20:04.957 clat (usec): min=953, max=6659, avg=3843.38, stdev=159.48 00:20:04.957 lat (usec): min=960, max=6662, avg=3871.64, stdev=133.62 00:20:04.957 clat percentiles (usec): 00:20:04.957 | 1.00th=[ 3359], 5.00th=[ 3589], 10.00th=[ 3720], 20.00th=[ 3752], 00:20:04.957 | 30.00th=[ 3785], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3884], 00:20:04.957 | 70.00th=[ 3916], 80.00th=[ 3949], 90.00th=[ 4015], 95.00th=[ 4080], 00:20:04.957 | 99.00th=[ 4228], 99.50th=[ 4293], 99.90th=[ 4817], 99.95th=[ 5145], 00:20:04.957 | 99.99th=[ 6128] 00:20:04.957 bw ( KiB/s): min=32640, max=33440, per=100.00%, avg=33074.67, stdev=287.59, samples=9 00:20:04.957 iops : min=32640, max=33440, avg=33074.67, stdev=287.59, samples=9 00:20:04.957 lat (usec) : 1000=0.01% 00:20:04.957 lat (msec) : 2=0.02%, 4=88.24%, 10=11.73% 00:20:04.957 cpu : usr=7.14%, sys=14.61%, ctx=94103, majf=0, minf=32 00:20:04.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:20:04.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:04.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:04.957 issued rwts: total=165364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:20:04.957 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:04.957 00:20:04.957 Run status group 0 (all jobs): 00:20:04.957 READ: bw=32.3MiB/s (33.8MB/s), 32.3MiB/s-32.3MiB/s (33.8MB/s-33.8MB/s), io=161MiB (169MB), run=5003-5003msec 00:20:04.957 00:20:04.957 Disk stats (read/write): 00:20:04.957 sda: ios=161618/0, merge=0/0, ticks=534816/0, in_queue=534816, util=98.11% 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:20:04.957 "tick_rate": 2200000000, 00:20:04.957 "ticks": 2487604039989, 00:20:04.957 "bdevs": [ 00:20:04.957 { 00:20:04.957 "name": "Malloc0", 00:20:04.957 "bytes_read": 421706240, 00:20:04.957 "num_read_ops": 410795, 00:20:04.957 "bytes_written": 0, 00:20:04.957 "num_write_ops": 0, 00:20:04.957 "bytes_unmapped": 0, 00:20:04.957 "num_unmap_ops": 0, 00:20:04.957 "bytes_copied": 0, 00:20:04.957 "num_copy_ops": 0, 00:20:04.957 "read_latency_ticks": 705870418345, 00:20:04.957 "max_read_latency_ticks": 8694199, 00:20:04.957 "min_read_latency_ticks": 18672, 00:20:04.957 "write_latency_ticks": 0, 00:20:04.957 "max_write_latency_ticks": 0, 00:20:04.957 "min_write_latency_ticks": 0, 00:20:04.957 "unmap_latency_ticks": 0, 00:20:04.957 "max_unmap_latency_ticks": 0, 00:20:04.957 "min_unmap_latency_ticks": 0, 00:20:04.957 "copy_latency_ticks": 0, 00:20:04.957 "max_copy_latency_ticks": 0, 00:20:04.957 "min_copy_latency_ticks": 0, 00:20:04.957 "io_error": {} 00:20:04.957 } 00:20:04.957 ] 00:20:04.957 }' 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:20:04.957 17:24:23 
iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=410795 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=421706240 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=33072 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=33866547 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@112 -- # '[' 33072 -gt 16000 ']' 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@115 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 16000 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@116 -- # run_fio Malloc0 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:20:04.957 "tick_rate": 2200000000, 00:20:04.957 "ticks": 2487885670223, 00:20:04.957 "bdevs": [ 00:20:04.957 { 00:20:04.957 "name": "Malloc0", 00:20:04.957 "bytes_read": 421706240, 00:20:04.957 "num_read_ops": 410795, 00:20:04.957 "bytes_written": 0, 00:20:04.957 "num_write_ops": 0, 00:20:04.957 "bytes_unmapped": 0, 00:20:04.957 "num_unmap_ops": 0, 00:20:04.957 "bytes_copied": 0, 00:20:04.957 "num_copy_ops": 0, 00:20:04.957 "read_latency_ticks": 705870418345, 00:20:04.957 "max_read_latency_ticks": 8694199, 00:20:04.957 "min_read_latency_ticks": 18672, 00:20:04.957 "write_latency_ticks": 0, 00:20:04.957 "max_write_latency_ticks": 0, 00:20:04.957 "min_write_latency_ticks": 0, 00:20:04.957 "unmap_latency_ticks": 0, 00:20:04.957 "max_unmap_latency_ticks": 0, 00:20:04.957 "min_unmap_latency_ticks": 0, 00:20:04.957 "copy_latency_ticks": 0, 00:20:04.957 "max_copy_latency_ticks": 0, 00:20:04.957 "min_copy_latency_ticks": 0, 00:20:04.957 "io_error": {} 00:20:04.957 } 00:20:04.957 ] 00:20:04.957 }' 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=410795 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=421706240 00:20:04.957 17:24:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:20:04.957 [global] 00:20:04.957 thread=1 00:20:04.957 invalidate=1 00:20:04.957 rw=randread 00:20:04.957 time_based=1 00:20:04.957 runtime=5 00:20:04.957 ioengine=libaio 00:20:04.957 direct=1 00:20:04.957 bs=1024 00:20:04.957 iodepth=128 00:20:04.957 norandommap=1 00:20:04.957 numjobs=1 00:20:04.957 00:20:04.957 [job0] 
00:20:04.957 filename=/dev/sda 00:20:04.957 queue_depth set to 113 (sda) 00:20:05.215 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:20:05.215 fio-3.35 00:20:05.215 Starting 1 thread 00:20:10.478 00:20:10.478 job0: (groupid=0, jobs=1): err= 0: pid=77622: Mon Jul 22 17:24:29 2024 00:20:10.478 read: IOPS=16.0k, BW=15.6MiB/s (16.4MB/s)(78.2MiB/5008msec) 00:20:10.478 slat (nsec): min=1846, max=1472.8k, avg=59969.28, stdev=206948.51 00:20:10.478 clat (usec): min=2822, max=14656, avg=7938.57, stdev=333.50 00:20:10.478 lat (usec): min=2838, max=14664, avg=7998.54, stdev=263.64 00:20:10.478 clat percentiles (usec): 00:20:10.478 | 1.00th=[ 7046], 5.00th=[ 7177], 10.00th=[ 7570], 20.00th=[ 7963], 00:20:10.478 | 30.00th=[ 7963], 40.00th=[ 8029], 50.00th=[ 8029], 60.00th=[ 8029], 00:20:10.478 | 70.00th=[ 8029], 80.00th=[ 8029], 90.00th=[ 8094], 95.00th=[ 8094], 00:20:10.478 | 99.00th=[ 8225], 99.50th=[ 8291], 99.90th=[ 9634], 99.95th=[12518], 00:20:10.478 | 99.99th=[14615] 00:20:10.478 bw ( KiB/s): min=15810, max=16032, per=100.00%, avg=15994.20, stdev=67.76, samples=10 00:20:10.478 iops : min=15810, max=16032, avg=15994.20, stdev=67.76, samples=10 00:20:10.478 lat (msec) : 4=0.07%, 10=99.84%, 20=0.10% 00:20:10.478 cpu : usr=4.35%, sys=8.79%, ctx=46894, majf=0, minf=32 00:20:10.478 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:10.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:10.478 issued rwts: total=80098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:10.478 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:10.478 00:20:10.478 Run status group 0 (all jobs): 00:20:10.478 READ: bw=15.6MiB/s (16.4MB/s), 15.6MiB/s-15.6MiB/s (16.4MB/s-16.4MB/s), io=78.2MiB (82.0MB), run=5008-5008msec 00:20:10.478 00:20:10.478 Disk stats (read/write): 00:20:10.478 sda: 
ios=78240/0, merge=0/0, ticks=531168/0, in_queue=531168, util=98.11% 00:20:10.478 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:20:10.478 17:24:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.478 17:24:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:10.478 17:24:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.478 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:20:10.478 "tick_rate": 2200000000, 00:20:10.478 "ticks": 2499840675477, 00:20:10.478 "bdevs": [ 00:20:10.478 { 00:20:10.478 "name": "Malloc0", 00:20:10.478 "bytes_read": 503726592, 00:20:10.478 "num_read_ops": 490893, 00:20:10.478 "bytes_written": 0, 00:20:10.478 "num_write_ops": 0, 00:20:10.478 "bytes_unmapped": 0, 00:20:10.479 "num_unmap_ops": 0, 00:20:10.479 "bytes_copied": 0, 00:20:10.479 "num_copy_ops": 0, 00:20:10.479 "read_latency_ticks": 1322067501295, 00:20:10.479 "max_read_latency_ticks": 8694199, 00:20:10.479 "min_read_latency_ticks": 18672, 00:20:10.479 "write_latency_ticks": 0, 00:20:10.479 "max_write_latency_ticks": 0, 00:20:10.479 "min_write_latency_ticks": 0, 00:20:10.479 "unmap_latency_ticks": 0, 00:20:10.479 "max_unmap_latency_ticks": 0, 00:20:10.479 "min_unmap_latency_ticks": 0, 00:20:10.479 "copy_latency_ticks": 0, 00:20:10.479 "max_copy_latency_ticks": 0, 00:20:10.479 "min_copy_latency_ticks": 0, 00:20:10.479 "io_error": {} 00:20:10.479 } 00:20:10.479 ] 00:20:10.479 }' 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=490893 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=503726592 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # 
IOPS_RESULT=16019 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=16404070 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@117 -- # verify_qos_limits 16019 16000 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=16019 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=16000 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:20:10.479 I/O rate limiting tests successful 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@119 -- # echo 'I/O rate limiting tests successful' 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@122 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 0 --rw_mbytes_per_sec 16 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@123 -- # run_fio Malloc0 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:20:10.479 
17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:20:10.479 "tick_rate": 2200000000, 00:20:10.479 "ticks": 2500148767465, 00:20:10.479 "bdevs": [ 00:20:10.479 { 00:20:10.479 "name": "Malloc0", 00:20:10.479 "bytes_read": 503726592, 00:20:10.479 "num_read_ops": 490893, 00:20:10.479 "bytes_written": 0, 00:20:10.479 "num_write_ops": 0, 00:20:10.479 "bytes_unmapped": 0, 00:20:10.479 "num_unmap_ops": 0, 00:20:10.479 "bytes_copied": 0, 00:20:10.479 "num_copy_ops": 0, 00:20:10.479 "read_latency_ticks": 1322067501295, 00:20:10.479 "max_read_latency_ticks": 8694199, 00:20:10.479 "min_read_latency_ticks": 18672, 00:20:10.479 "write_latency_ticks": 0, 00:20:10.479 "max_write_latency_ticks": 0, 00:20:10.479 "min_write_latency_ticks": 0, 00:20:10.479 "unmap_latency_ticks": 0, 00:20:10.479 "max_unmap_latency_ticks": 0, 00:20:10.479 "min_unmap_latency_ticks": 0, 00:20:10.479 "copy_latency_ticks": 0, 00:20:10.479 "max_copy_latency_ticks": 0, 00:20:10.479 "min_copy_latency_ticks": 0, 00:20:10.479 "io_error": {} 00:20:10.479 } 00:20:10.479 ] 00:20:10.479 }' 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=490893 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=503726592 00:20:10.479 17:24:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 
5 00:20:10.479 [global] 00:20:10.479 thread=1 00:20:10.479 invalidate=1 00:20:10.479 rw=randread 00:20:10.479 time_based=1 00:20:10.479 runtime=5 00:20:10.479 ioengine=libaio 00:20:10.479 direct=1 00:20:10.479 bs=1024 00:20:10.479 iodepth=128 00:20:10.479 norandommap=1 00:20:10.479 numjobs=1 00:20:10.479 00:20:10.479 [job0] 00:20:10.479 filename=/dev/sda 00:20:10.479 queue_depth set to 113 (sda) 00:20:10.738 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:20:10.738 fio-3.35 00:20:10.738 Starting 1 thread 00:20:16.014 00:20:16.014 job0: (groupid=0, jobs=1): err= 0: pid=77706: Mon Jul 22 17:24:34 2024 00:20:16.014 read: IOPS=16.4k, BW=16.0MiB/s (16.8MB/s)(80.1MiB/5007msec) 00:20:16.014 slat (usec): min=2, max=2124, avg=57.97, stdev=229.41 00:20:16.014 clat (usec): min=861, max=14202, avg=7751.45, stdev=630.35 00:20:16.014 lat (usec): min=866, max=14206, avg=7809.43, stdev=611.99 00:20:16.014 clat percentiles (usec): 00:20:16.014 | 1.00th=[ 6259], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7242], 00:20:16.014 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 7963], 00:20:16.014 | 70.00th=[ 8094], 80.00th=[ 8291], 90.00th=[ 8455], 95.00th=[ 8717], 00:20:16.014 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[ 9372], 99.95th=[12125], 00:20:16.014 | 99.99th=[14091] 00:20:16.014 bw ( KiB/s): min=16222, max=16418, per=99.98%, avg=16381.20, stdev=58.16, samples=10 00:20:16.014 iops : min=16222, max=16418, avg=16381.20, stdev=58.16, samples=10 00:20:16.014 lat (usec) : 1000=0.01% 00:20:16.014 lat (msec) : 2=0.04%, 4=0.04%, 10=99.82%, 20=0.09% 00:20:16.014 cpu : usr=5.81%, sys=10.05%, ctx=44721, majf=0, minf=32 00:20:16.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:16.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:16.014 issued rwts: 
total=82033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.014 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:16.014 00:20:16.014 Run status group 0 (all jobs): 00:20:16.014 READ: bw=16.0MiB/s (16.8MB/s), 16.0MiB/s-16.0MiB/s (16.8MB/s-16.8MB/s), io=80.1MiB (84.0MB), run=5007-5007msec 00:20:16.014 00:20:16.014 Disk stats (read/write): 00:20:16.014 sda: ios=80150/0, merge=0/0, ticks=533917/0, in_queue=533917, util=98.13% 00:20:16.014 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:20:16.014 17:24:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.014 17:24:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:16.014 17:24:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.014 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:20:16.014 "tick_rate": 2200000000, 00:20:16.014 "ticks": 2512092623160, 00:20:16.014 "bdevs": [ 00:20:16.014 { 00:20:16.014 "name": "Malloc0", 00:20:16.014 "bytes_read": 587728384, 00:20:16.014 "num_read_ops": 572926, 00:20:16.014 "bytes_written": 0, 00:20:16.014 "num_write_ops": 0, 00:20:16.014 "bytes_unmapped": 0, 00:20:16.014 "num_unmap_ops": 0, 00:20:16.014 "bytes_copied": 0, 00:20:16.014 "num_copy_ops": 0, 00:20:16.014 "read_latency_ticks": 1872432564605, 00:20:16.014 "max_read_latency_ticks": 9336466, 00:20:16.014 "min_read_latency_ticks": 18672, 00:20:16.014 "write_latency_ticks": 0, 00:20:16.014 "max_write_latency_ticks": 0, 00:20:16.015 "min_write_latency_ticks": 0, 00:20:16.015 "unmap_latency_ticks": 0, 00:20:16.015 "max_unmap_latency_ticks": 0, 00:20:16.015 "min_unmap_latency_ticks": 0, 00:20:16.015 "copy_latency_ticks": 0, 00:20:16.015 "max_copy_latency_ticks": 0, 00:20:16.015 "min_copy_latency_ticks": 0, 00:20:16.015 "io_error": {} 00:20:16.015 } 00:20:16.015 ] 00:20:16.015 }' 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r 
'.bdevs[0].num_read_ops' 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=572926 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=587728384 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=16406 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=16800358 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@124 -- # verify_qos_limits 16800358 16777216 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=16800358 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=16777216 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@127 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_mbytes_per_sec 0 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@128 -- # run_fio Malloc0 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # 
local end_io_count 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:20:16.015 "tick_rate": 2200000000, 00:20:16.015 "ticks": 2512391867474, 00:20:16.015 "bdevs": [ 00:20:16.015 { 00:20:16.015 "name": "Malloc0", 00:20:16.015 "bytes_read": 587728384, 00:20:16.015 "num_read_ops": 572926, 00:20:16.015 "bytes_written": 0, 00:20:16.015 "num_write_ops": 0, 00:20:16.015 "bytes_unmapped": 0, 00:20:16.015 "num_unmap_ops": 0, 00:20:16.015 "bytes_copied": 0, 00:20:16.015 "num_copy_ops": 0, 00:20:16.015 "read_latency_ticks": 1872432564605, 00:20:16.015 "max_read_latency_ticks": 9336466, 00:20:16.015 "min_read_latency_ticks": 18672, 00:20:16.015 "write_latency_ticks": 0, 00:20:16.015 "max_write_latency_ticks": 0, 00:20:16.015 "min_write_latency_ticks": 0, 00:20:16.015 "unmap_latency_ticks": 0, 00:20:16.015 "max_unmap_latency_ticks": 0, 00:20:16.015 "min_unmap_latency_ticks": 0, 00:20:16.015 "copy_latency_ticks": 0, 00:20:16.015 "max_copy_latency_ticks": 0, 00:20:16.015 "min_copy_latency_ticks": 0, 00:20:16.015 "io_error": {} 00:20:16.015 } 00:20:16.015 ] 00:20:16.015 }' 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=572926 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:20:16.015 17:24:34 
iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=587728384 00:20:16.015 17:24:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:20:16.272 [global] 00:20:16.272 thread=1 00:20:16.272 invalidate=1 00:20:16.272 rw=randread 00:20:16.272 time_based=1 00:20:16.272 runtime=5 00:20:16.272 ioengine=libaio 00:20:16.272 direct=1 00:20:16.272 bs=1024 00:20:16.272 iodepth=128 00:20:16.272 norandommap=1 00:20:16.272 numjobs=1 00:20:16.272 00:20:16.272 [job0] 00:20:16.272 filename=/dev/sda 00:20:16.272 queue_depth set to 113 (sda) 00:20:16.272 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:20:16.272 fio-3.35 00:20:16.272 Starting 1 thread 00:20:21.545 00:20:21.545 job0: (groupid=0, jobs=1): err= 0: pid=77801: Mon Jul 22 17:24:40 2024 00:20:21.545 read: IOPS=32.8k, BW=32.1MiB/s (33.6MB/s)(160MiB/5004msec) 00:20:21.545 slat (nsec): min=1853, max=629479, avg=28379.02, stdev=89348.48 00:20:21.545 clat (usec): min=1156, max=6459, avg=3867.53, stdev=165.02 00:20:21.545 lat (usec): min=1163, max=6466, avg=3895.91, stdev=140.12 00:20:21.545 clat percentiles (usec): 00:20:21.545 | 1.00th=[ 3392], 5.00th=[ 3621], 10.00th=[ 3720], 20.00th=[ 3752], 00:20:21.545 | 30.00th=[ 3785], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3884], 00:20:21.545 | 70.00th=[ 3949], 80.00th=[ 3982], 90.00th=[ 4047], 95.00th=[ 4146], 00:20:21.545 | 99.00th=[ 4293], 99.50th=[ 4293], 99.90th=[ 4359], 99.95th=[ 4490], 00:20:21.545 | 99.99th=[ 6063] 00:20:21.545 bw ( KiB/s): min=32448, max=33238, per=100.00%, avg=32861.11, stdev=307.38, samples=9 00:20:21.545 iops : min=32448, max=33238, avg=32861.11, stdev=307.38, samples=9 00:20:21.545 lat (msec) : 2=0.02%, 4=82.73%, 10=17.25% 00:20:21.545 cpu : usr=7.34%, sys=14.83%, ctx=92353, majf=0, minf=32 00:20:21.545 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:20:21.545 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:21.545 issued rwts: total=164323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.545 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:21.545 00:20:21.545 Run status group 0 (all jobs): 00:20:21.545 READ: bw=32.1MiB/s (33.6MB/s), 32.1MiB/s-32.1MiB/s (33.6MB/s-33.6MB/s), io=160MiB (168MB), run=5004-5004msec 00:20:21.545 00:20:21.545 Disk stats (read/write): 00:20:21.545 sda: ios=160625/0, merge=0/0, ticks=534945/0, in_queue=534945, util=98.14% 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:20:21.545 "tick_rate": 2200000000, 00:20:21.545 "ticks": 2524308706656, 00:20:21.545 "bdevs": [ 00:20:21.545 { 00:20:21.545 "name": "Malloc0", 00:20:21.545 "bytes_read": 755995136, 00:20:21.545 "num_read_ops": 737249, 00:20:21.545 "bytes_written": 0, 00:20:21.545 "num_write_ops": 0, 00:20:21.545 "bytes_unmapped": 0, 00:20:21.545 "num_unmap_ops": 0, 00:20:21.545 "bytes_copied": 0, 00:20:21.545 "num_copy_ops": 0, 00:20:21.545 "read_latency_ticks": 1928058705567, 00:20:21.545 "max_read_latency_ticks": 9336466, 00:20:21.545 "min_read_latency_ticks": 18672, 00:20:21.545 "write_latency_ticks": 0, 00:20:21.545 "max_write_latency_ticks": 0, 00:20:21.545 "min_write_latency_ticks": 0, 00:20:21.545 "unmap_latency_ticks": 0, 00:20:21.545 "max_unmap_latency_ticks": 0, 00:20:21.545 "min_unmap_latency_ticks": 0, 00:20:21.545 "copy_latency_ticks": 0, 00:20:21.545 "max_copy_latency_ticks": 0, 00:20:21.545 
"min_copy_latency_ticks": 0, 00:20:21.545 "io_error": {} 00:20:21.545 } 00:20:21.545 ] 00:20:21.545 }' 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=737249 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=755995136 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=32864 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=33653350 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@129 -- # '[' 33653350 -gt 16777216 ']' 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@132 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_mbytes_per_sec 16 --r_mbytes_per_sec 8 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@133 -- # run_fio Malloc0 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 
00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:20:21.545 "tick_rate": 2200000000, 00:20:21.545 "ticks": 2524590322532, 00:20:21.545 "bdevs": [ 00:20:21.545 { 00:20:21.545 "name": "Malloc0", 00:20:21.545 "bytes_read": 755995136, 00:20:21.545 "num_read_ops": 737249, 00:20:21.545 "bytes_written": 0, 00:20:21.545 "num_write_ops": 0, 00:20:21.545 "bytes_unmapped": 0, 00:20:21.545 "num_unmap_ops": 0, 00:20:21.545 "bytes_copied": 0, 00:20:21.545 "num_copy_ops": 0, 00:20:21.545 "read_latency_ticks": 1928058705567, 00:20:21.545 "max_read_latency_ticks": 9336466, 00:20:21.545 "min_read_latency_ticks": 18672, 00:20:21.545 "write_latency_ticks": 0, 00:20:21.545 "max_write_latency_ticks": 0, 00:20:21.545 "min_write_latency_ticks": 0, 00:20:21.545 "unmap_latency_ticks": 0, 00:20:21.545 "max_unmap_latency_ticks": 0, 00:20:21.545 "min_unmap_latency_ticks": 0, 00:20:21.545 "copy_latency_ticks": 0, 00:20:21.545 "max_copy_latency_ticks": 0, 00:20:21.545 "min_copy_latency_ticks": 0, 00:20:21.545 "io_error": {} 00:20:21.545 } 00:20:21.545 ] 00:20:21.545 }' 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=737249 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=755995136 00:20:21.545 17:24:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:20:21.802 [global] 00:20:21.802 thread=1 00:20:21.802 invalidate=1 00:20:21.802 
rw=randread 00:20:21.802 time_based=1 00:20:21.802 runtime=5 00:20:21.802 ioengine=libaio 00:20:21.802 direct=1 00:20:21.802 bs=1024 00:20:21.802 iodepth=128 00:20:21.802 norandommap=1 00:20:21.802 numjobs=1 00:20:21.802 00:20:21.802 [job0] 00:20:21.802 filename=/dev/sda 00:20:21.802 queue_depth set to 113 (sda) 00:20:21.802 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:20:21.802 fio-3.35 00:20:21.802 Starting 1 thread 00:20:27.075 00:20:27.075 job0: (groupid=0, jobs=1): err= 0: pid=77881: Mon Jul 22 17:24:45 2024 00:20:27.075 read: IOPS=8190, BW=8190KiB/s (8387kB/s)(40.1MiB/5014msec) 00:20:27.075 slat (nsec): min=1824, max=1748.6k, avg=117962.69, stdev=302994.72 00:20:27.075 clat (usec): min=2506, max=29599, avg=15505.41, stdev=800.24 00:20:27.075 lat (usec): min=2521, max=29603, avg=15623.37, stdev=795.38 00:20:27.075 clat percentiles (usec): 00:20:27.075 | 1.00th=[14353], 5.00th=[14615], 10.00th=[14746], 20.00th=[15008], 00:20:27.075 | 30.00th=[15139], 40.00th=[15270], 50.00th=[15533], 60.00th=[15795], 00:20:27.075 | 70.00th=[15926], 80.00th=[16057], 90.00th=[16188], 95.00th=[16319], 00:20:27.075 | 99.00th=[16581], 99.50th=[16712], 99.90th=[24773], 99.95th=[27395], 00:20:27.075 | 99.99th=[28443] 00:20:27.075 bw ( KiB/s): min= 8098, max= 8208, per=99.96%, avg=8187.60, stdev=34.15, samples=10 00:20:27.075 iops : min= 8098, max= 8208, avg=8187.60, stdev=34.15, samples=10 00:20:27.075 lat (msec) : 4=0.03%, 10=0.14%, 20=99.65%, 50=0.18% 00:20:27.075 cpu : usr=3.51%, sys=6.96%, ctx=24518, majf=0, minf=32 00:20:27.075 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:27.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:27.075 issued rwts: total=41065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.075 latency : target=0, window=0, percentile=100.00%, depth=128 
00:20:27.075 00:20:27.075 Run status group 0 (all jobs): 00:20:27.075 READ: bw=8190KiB/s (8387kB/s), 8190KiB/s-8190KiB/s (8387kB/s-8387kB/s), io=40.1MiB (42.1MB), run=5014-5014msec 00:20:27.075 00:20:27.075 Disk stats (read/write): 00:20:27.075 sda: ios=40081/0, merge=0/0, ticks=546600/0, in_queue=546600, util=98.14% 00:20:27.075 17:24:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:20:27.075 17:24:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.075 17:24:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:27.075 17:24:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.075 17:24:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:20:27.075 "tick_rate": 2200000000, 00:20:27.075 "ticks": 2536560199136, 00:20:27.075 "bdevs": [ 00:20:27.075 { 00:20:27.075 "name": "Malloc0", 00:20:27.075 "bytes_read": 798045696, 00:20:27.075 "num_read_ops": 778314, 00:20:27.075 "bytes_written": 0, 00:20:27.075 "num_write_ops": 0, 00:20:27.075 "bytes_unmapped": 0, 00:20:27.075 "num_unmap_ops": 0, 00:20:27.075 "bytes_copied": 0, 00:20:27.075 "num_copy_ops": 0, 00:20:27.075 "read_latency_ticks": 2579599605459, 00:20:27.075 "max_read_latency_ticks": 18013867, 00:20:27.075 "min_read_latency_ticks": 18672, 00:20:27.075 "write_latency_ticks": 0, 00:20:27.075 "max_write_latency_ticks": 0, 00:20:27.075 "min_write_latency_ticks": 0, 00:20:27.075 "unmap_latency_ticks": 0, 00:20:27.075 "max_unmap_latency_ticks": 0, 00:20:27.075 "min_unmap_latency_ticks": 0, 00:20:27.075 "copy_latency_ticks": 0, 00:20:27.075 "max_copy_latency_ticks": 0, 00:20:27.075 "min_copy_latency_ticks": 0, 00:20:27.075 "io_error": {} 00:20:27.075 } 00:20:27.075 ] 00:20:27.075 }' 00:20:27.075 17:24:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:20:27.075 17:24:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=778314 00:20:27.075 
17:24:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:20:27.075 17:24:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=798045696 00:20:27.075 17:24:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=8213 00:20:27.075 17:24:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=8410112 00:20:27.075 17:24:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@134 -- # verify_qos_limits 8410112 8388608 00:20:27.075 17:24:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=8410112 00:20:27.075 17:24:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=8388608 00:20:27.075 17:24:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:20:27.075 17:24:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:20:27.075 17:24:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:20:27.075 I/O bandwidth limiting tests successful 00:20:27.075 Cleaning up iSCSI connection 00:20:27.075 17:24:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:20:27.075 17:24:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@136 -- # echo 'I/O bandwidth limiting tests successful' 00:20:27.075 17:24:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@138 -- # iscsicleanup 00:20:27.075 17:24:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:20:27.075 17:24:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:20:27.075 Logging out of session [sid: 20, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:20:27.076 Logout of [sid: 20, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:20:27.076 17:24:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:20:27.076 17:24:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@983 -- # rm -rf 00:20:27.076 17:24:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@139 -- # rpc_cmd iscsi_delete_target_node iqn.2016-06.io.spdk:Target1 00:20:27.076 17:24:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.076 17:24:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:27.076 17:24:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.076 17:24:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@141 -- # rm -f ./local-job0-0-verify.state 00:20:27.076 17:24:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:20:27.076 17:24:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@143 -- # killprocess 77271 00:20:27.076 17:24:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@948 -- # '[' -z 77271 ']' 00:20:27.076 17:24:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@952 -- # kill -0 77271 00:20:27.076 17:24:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@953 -- # uname 00:20:27.076 17:24:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:27.076 17:24:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77271 00:20:27.334 killing process with pid 77271 00:20:27.334 17:24:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:27.334 17:24:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:27.334 17:24:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77271' 00:20:27.334 17:24:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@967 -- # kill 77271 00:20:27.334 17:24:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@972 -- # wait 77271 00:20:29.863 
17:24:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@145 -- # iscsitestfini 00:20:29.863 17:24:48 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:20:29.863 00:20:29.863 real 0m44.336s 00:20:29.863 user 0m38.097s 00:20:29.863 sys 0m12.307s 00:20:29.863 17:24:48 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:29.863 17:24:48 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:20:29.863 ************************************ 00:20:29.863 END TEST iscsi_tgt_qos 00:20:29.863 ************************************ 00:20:29.863 17:24:48 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:20:29.863 17:24:48 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@39 -- # run_test iscsi_tgt_ip_migration /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration/ip_migration.sh 00:20:29.863 17:24:48 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:29.863 17:24:48 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:29.863 17:24:48 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:20:29.863 ************************************ 00:20:29.863 START TEST iscsi_tgt_ip_migration 00:20:29.863 ************************************ 00:20:29.863 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration/ip_migration.sh 00:20:29.863 * Looking for test storage... 
00:20:29.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration 00:20:29.863 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:29.863 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:29.863 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:29.863 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:29.863 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:29.863 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:29.863 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:29.863 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:20:29.863 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@26 
-- # INITIATOR_NAME=ANY 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@11 -- # iscsitestinit 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@13 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@14 -- # pids=() 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@16 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:20:29.864 17:24:48 
iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:20:29.864 #define SPDK_CONFIG_H 00:20:29.864 #define SPDK_CONFIG_APPS 1 00:20:29.864 #define SPDK_CONFIG_ARCH native 00:20:29.864 #define SPDK_CONFIG_ASAN 1 00:20:29.864 #undef SPDK_CONFIG_AVAHI 00:20:29.864 #undef SPDK_CONFIG_CET 00:20:29.864 #define SPDK_CONFIG_COVERAGE 1 00:20:29.864 #define SPDK_CONFIG_CROSS_PREFIX 00:20:29.864 #undef SPDK_CONFIG_CRYPTO 00:20:29.864 #undef SPDK_CONFIG_CRYPTO_MLX5 00:20:29.864 #undef SPDK_CONFIG_CUSTOMOCF 00:20:29.864 #undef SPDK_CONFIG_DAOS 00:20:29.864 #define SPDK_CONFIG_DAOS_DIR 00:20:29.864 #define SPDK_CONFIG_DEBUG 1 00:20:29.864 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:20:29.864 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:20:29.864 #define SPDK_CONFIG_DPDK_INC_DIR 00:20:29.864 #define SPDK_CONFIG_DPDK_LIB_DIR 00:20:29.864 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:20:29.864 #undef SPDK_CONFIG_DPDK_UADK 00:20:29.864 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:20:29.864 #define SPDK_CONFIG_EXAMPLES 1 
00:20:29.864 #undef SPDK_CONFIG_FC 00:20:29.864 #define SPDK_CONFIG_FC_PATH 00:20:29.864 #define SPDK_CONFIG_FIO_PLUGIN 1 00:20:29.864 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:20:29.864 #undef SPDK_CONFIG_FUSE 00:20:29.864 #undef SPDK_CONFIG_FUZZER 00:20:29.864 #define SPDK_CONFIG_FUZZER_LIB 00:20:29.864 #undef SPDK_CONFIG_GOLANG 00:20:29.864 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:20:29.864 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:20:29.864 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:20:29.864 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:20:29.864 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:20:29.864 #undef SPDK_CONFIG_HAVE_LIBBSD 00:20:29.864 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:20:29.864 #define SPDK_CONFIG_IDXD 1 00:20:29.864 #define SPDK_CONFIG_IDXD_KERNEL 1 00:20:29.864 #undef SPDK_CONFIG_IPSEC_MB 00:20:29.864 #define SPDK_CONFIG_IPSEC_MB_DIR 00:20:29.864 #define SPDK_CONFIG_ISAL 1 00:20:29.864 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:20:29.864 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:20:29.864 #define SPDK_CONFIG_LIBDIR 00:20:29.864 #undef SPDK_CONFIG_LTO 00:20:29.864 #define SPDK_CONFIG_MAX_LCORES 128 00:20:29.864 #define SPDK_CONFIG_NVME_CUSE 1 00:20:29.864 #undef SPDK_CONFIG_OCF 00:20:29.864 #define SPDK_CONFIG_OCF_PATH 00:20:29.864 #define SPDK_CONFIG_OPENSSL_PATH 00:20:29.864 #undef SPDK_CONFIG_PGO_CAPTURE 00:20:29.864 #define SPDK_CONFIG_PGO_DIR 00:20:29.864 #undef SPDK_CONFIG_PGO_USE 00:20:29.864 #define SPDK_CONFIG_PREFIX /usr/local 00:20:29.864 #undef SPDK_CONFIG_RAID5F 00:20:29.864 #define SPDK_CONFIG_RBD 1 00:20:29.864 #define SPDK_CONFIG_RDMA 1 00:20:29.864 #define SPDK_CONFIG_RDMA_PROV verbs 00:20:29.864 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:20:29.864 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:20:29.864 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:20:29.864 #define SPDK_CONFIG_SHARED 1 00:20:29.864 #undef SPDK_CONFIG_SMA 00:20:29.864 #define SPDK_CONFIG_TESTS 1 00:20:29.864 #undef SPDK_CONFIG_TSAN 00:20:29.864 #define SPDK_CONFIG_UBLK 1 
00:20:29.864 #define SPDK_CONFIG_UBSAN 1 00:20:29.864 #undef SPDK_CONFIG_UNIT_TESTS 00:20:29.864 #undef SPDK_CONFIG_URING 00:20:29.864 #define SPDK_CONFIG_URING_PATH 00:20:29.864 #undef SPDK_CONFIG_URING_ZNS 00:20:29.864 #undef SPDK_CONFIG_USDT 00:20:29.864 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:20:29.864 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:20:29.864 #undef SPDK_CONFIG_VFIO_USER 00:20:29.864 #define SPDK_CONFIG_VFIO_USER_DIR 00:20:29.864 #define SPDK_CONFIG_VHOST 1 00:20:29.864 #define SPDK_CONFIG_VIRTIO 1 00:20:29.864 #undef SPDK_CONFIG_VTUNE 00:20:29.864 #define SPDK_CONFIG_VTUNE_DIR 00:20:29.864 #define SPDK_CONFIG_WERROR 1 00:20:29.864 #define SPDK_CONFIG_WPDK_DIR 00:20:29.864 #undef SPDK_CONFIG_XNVME 00:20:29.864 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@17 -- # NETMASK=127.0.0.0/24 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@18 -- # MIGRATION_ADDRESS=127.0.0.2 00:20:29.864 Running ip migration tests 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@56 -- # echo 'Running ip migration tests' 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@57 -- # timing_enter start_iscsi_tgt_0 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@58 -- # rpc_first_addr=/var/tmp/spdk0.sock 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@59 -- # iscsi_tgt_start /var/tmp/spdk0.sock 1 00:20:29.864 Process pid: 78036 00:20:29.864 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock... 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@39 -- # pid=78036 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@40 -- # echo 'Process pid: 78036' 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk0.sock -m 1 --wait-for-rpc 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@41 -- # pids+=($pid) 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@43 -- # trap 'kill_all_iscsi_target; exit 1' SIGINT SIGTERM EXIT 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@45 -- # waitforlisten 78036 /var/tmp/spdk0.sock 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@829 -- # '[' -z 78036 ']' 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk0.sock 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock...' 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:29.864 17:24:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:30.123 [2024-07-22 17:24:48.888898] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:20:30.123 [2024-07-22 17:24:48.889338] [ DPDK EAL parameters: iscsi --no-shconf -c 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78036 ] 00:20:30.123 [2024-07-22 17:24:49.054373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.381 [2024-07-22 17:24:49.324570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.947 17:24:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:30.947 17:24:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@862 -- # return 0 00:20:30.947 17:24:49 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@46 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_set_options -o 30 -a 64 00:20:30.947 17:24:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.947 17:24:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:30.947 17:24:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.947 17:24:49 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@47 -- # rpc_cmd -s /var/tmp/spdk0.sock framework_start_init 00:20:30.947 17:24:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.947 17:24:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:31.893 iscsi_tgt is listening. Running tests... 00:20:31.893 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.893 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@48 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:20:31.893 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@50 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_initiator_group 2 ANY 127.0.0.0/24 00:20:31.893 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.893 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:31.893 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.893 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@51 -- # rpc_cmd -s /var/tmp/spdk0.sock bdev_malloc_create 64 512 00:20:31.893 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.893 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:31.893 Malloc0 00:20:31.893 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.893 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@53 -- # trap 'kill_all_iscsi_target; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:20:31.893 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@60 -- # timing_exit start_iscsi_tgt_0 00:20:31.893 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:31.893 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:31.893 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@62 -- # timing_enter start_iscsi_tgt_1 00:20:31.893 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:31.894 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:31.894 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@63 -- # rpc_second_addr=/var/tmp/spdk1.sock 00:20:31.894 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- 
ip_migration/ip_migration.sh@64 -- # iscsi_tgt_start /var/tmp/spdk1.sock 2 00:20:31.894 Process pid: 78079 00:20:31.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock... 00:20:31.894 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@39 -- # pid=78079 00:20:31.894 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk1.sock -m 2 --wait-for-rpc 00:20:31.894 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@40 -- # echo 'Process pid: 78079' 00:20:31.894 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@41 -- # pids+=($pid) 00:20:31.894 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@43 -- # trap 'kill_all_iscsi_target; exit 1' SIGINT SIGTERM EXIT 00:20:31.894 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@45 -- # waitforlisten 78079 /var/tmp/spdk1.sock 00:20:31.894 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@829 -- # '[' -z 78079 ']' 00:20:31.894 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk1.sock 00:20:31.894 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:31.894 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock...' 00:20:31.894 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:31.894 17:24:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:32.171 [2024-07-22 17:24:50.922691] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:20:32.171 [2024-07-22 17:24:50.923043] [ DPDK EAL parameters: iscsi --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78079 ] 00:20:32.171 [2024-07-22 17:24:51.084704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.429 [2024-07-22 17:24:51.378335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.996 17:24:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:32.996 17:24:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@862 -- # return 0 00:20:32.996 17:24:51 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@46 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_set_options -o 30 -a 64 00:20:32.996 17:24:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.996 17:24:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:32.996 17:24:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.996 17:24:51 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@47 -- # rpc_cmd -s /var/tmp/spdk1.sock framework_start_init 00:20:32.996 17:24:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.996 17:24:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:33.931 iscsi_tgt is listening. Running tests... 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@48 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@50 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_initiator_group 2 ANY 127.0.0.0/24 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@51 -- # rpc_cmd -s /var/tmp/spdk1.sock bdev_malloc_create 64 512 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:33.931 Malloc0 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@53 -- # trap 'kill_all_iscsi_target; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@65 -- # timing_exit start_iscsi_tgt_1 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@67 -- # rpc_add_target_node /var/tmp/spdk0.sock 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@28 -- # ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@29 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_portal_group 1 127.0.0.2:3260 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@30 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@31 -- # ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int 00:20:33.931 17:24:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@69 -- # sleep 1 00:20:34.904 17:24:53 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@70 -- # iscsiadm -m discovery -t sendtargets -p 127.0.0.2:3260 00:20:34.904 127.0.0.2:3260,1 iqn.2016-06.io.spdk:target1 00:20:34.904 17:24:53 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@71 -- # sleep 1 00:20:35.840 17:24:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@72 -- # iscsiadm -m node --login -p 127.0.0.2:3260 00:20:35.840 Logging in to [iface: default, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] 00:20:35.840 Login to [iface: default, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] successful. 
00:20:35.840 17:24:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@73 -- # waitforiscsidevices 1 00:20:35.840 17:24:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@116 -- # local num=1 00:20:35.840 17:24:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:20:35.840 17:24:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:20:35.840 17:24:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:20:35.840 17:24:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:20:35.840 [2024-07-22 17:24:54.776401] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:35.840 17:24:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # n=1 00:20:35.840 17:24:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:20:35.840 17:24:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@123 -- # return 0 00:20:35.840 17:24:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@77 -- # fiopid=78160 00:20:35.840 17:24:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@78 -- # sleep 3 00:20:35.840 17:24:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 32 -t randrw -r 12 00:20:36.097 [global] 00:20:36.097 thread=1 00:20:36.097 invalidate=1 00:20:36.097 rw=randrw 00:20:36.097 time_based=1 00:20:36.097 runtime=12 00:20:36.097 ioengine=libaio 00:20:36.097 direct=1 00:20:36.097 bs=4096 00:20:36.097 iodepth=32 00:20:36.097 norandommap=1 00:20:36.097 numjobs=1 00:20:36.097 00:20:36.097 [job0] 00:20:36.097 filename=/dev/sda 00:20:36.097 queue_depth set to 113 (sda) 00:20:36.097 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32 00:20:36.097 fio-3.35 
00:20:36.097 Starting 1 thread 00:20:36.097 [2024-07-22 17:24:54.951407] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:39.379 17:24:57 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@80 -- # rpc_cmd -s /var/tmp/spdk0.sock spdk_kill_instance SIGTERM 00:20:39.379 17:24:57 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.379 17:24:57 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:39.945 17:24:58 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.945 17:24:58 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@81 -- # wait 78036 00:20:41.318 17:25:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@83 -- # rpc_add_target_node /var/tmp/spdk1.sock 00:20:41.318 17:25:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@28 -- # ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int 00:20:41.318 17:25:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@29 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_portal_group 1 127.0.0.2:3260 00:20:41.318 17:25:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.318 17:25:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:41.318 17:25:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.318 17:25:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@30 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d 00:20:41.318 17:25:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.318 17:25:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:41.318 17:25:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:20:41.318 17:25:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@31 -- # ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int 00:20:41.576 17:25:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@85 -- # wait 78160 00:20:48.137 [2024-07-22 17:25:07.062975] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:48.396 00:20:48.396 job0: (groupid=0, jobs=1): err= 0: pid=78187: Mon Jul 22 17:25:07 2024 00:20:48.396 read: IOPS=7687, BW=30.0MiB/s (31.5MB/s)(360MiB/12001msec) 00:20:48.396 slat (usec): min=2, max=491, avg= 6.07, stdev= 6.86 00:20:48.396 clat (usec): min=297, max=5007.5k, avg=1978.14, stdev=61662.38 00:20:48.396 lat (usec): min=310, max=5007.5k, avg=1984.20, stdev=61662.45 00:20:48.396 clat percentiles (usec): 00:20:48.396 | 1.00th=[ 791], 5.00th=[ 914], 10.00th=[ 988], 00:20:48.396 | 20.00th=[ 1074], 30.00th=[ 1106], 40.00th=[ 1156], 00:20:48.396 | 50.00th=[ 1188], 60.00th=[ 1237], 70.00th=[ 1287], 00:20:48.396 | 80.00th=[ 1385], 90.00th=[ 1516], 95.00th=[ 1598], 00:20:48.396 | 99.00th=[ 1745], 99.50th=[ 1795], 99.90th=[ 1909], 00:20:48.396 | 99.95th=[ 2008], 99.99th=[4999611] 00:20:48.396 bw ( KiB/s): min=25544, max=54272, per=100.00%, avg=49159.43, stdev=9547.41, samples=14 00:20:48.396 iops : min= 6386, max=13568, avg=12289.86, stdev=2386.85, samples=14 00:20:48.396 write: IOPS=7662, BW=29.9MiB/s (31.4MB/s)(359MiB/12001msec); 0 zone resets 00:20:48.396 slat (nsec): min=1987, max=425210, avg=6643.52, stdev=8239.52 00:20:48.396 clat (usec): min=243, max=5007.5k, avg=2177.05, stdev=70029.38 00:20:48.396 lat (usec): min=264, max=5007.5k, avg=2183.69, stdev=70029.47 00:20:48.396 clat percentiles (usec): 00:20:48.396 | 1.00th=[ 758], 5.00th=[ 889], 10.00th=[ 955], 00:20:48.396 | 20.00th=[ 1037], 30.00th=[ 1090], 40.00th=[ 1123], 00:20:48.396 | 50.00th=[ 1156], 60.00th=[ 1221], 70.00th=[ 1287], 00:20:48.396 | 80.00th=[ 1385], 90.00th=[ 1500], 95.00th=[ 1582], 
00:20:48.396 | 99.00th=[ 1713], 99.50th=[ 1745], 99.90th=[ 1876], 00:20:48.396 | 99.95th=[ 1991], 99.99th=[4999611] 00:20:48.396 bw ( KiB/s): min=25176, max=54848, per=100.00%, avg=48965.71, stdev=9682.19, samples=14 00:20:48.396 iops : min= 6294, max=13712, avg=12241.43, stdev=2420.55, samples=14 00:20:48.396 lat (usec) : 250=0.01%, 500=0.02%, 750=0.59%, 1000=12.56% 00:20:48.396 lat (msec) : 2=86.78%, 4=0.03%, >=2000=0.02% 00:20:48.396 cpu : usr=4.07%, sys=7.67%, ctx=29666, majf=0, minf=1 00:20:48.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0% 00:20:48.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:20:48.396 issued rwts: total=92257,91962,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.396 latency : target=0, window=0, percentile=100.00%, depth=32 00:20:48.396 00:20:48.396 Run status group 0 (all jobs): 00:20:48.396 READ: bw=30.0MiB/s (31.5MB/s), 30.0MiB/s-30.0MiB/s (31.5MB/s-31.5MB/s), io=360MiB (378MB), run=12001-12001msec 00:20:48.396 WRITE: bw=29.9MiB/s (31.4MB/s), 29.9MiB/s-29.9MiB/s (31.4MB/s-31.4MB/s), io=359MiB (377MB), run=12001-12001msec 00:20:48.396 00:20:48.396 Disk stats (read/write): 00:20:48.396 sda: ios=90890/90551, merge=0/0, ticks=170973/192713, in_queue=363687, util=99.37% 00:20:48.396 17:25:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@87 -- # trap - SIGINT SIGTERM EXIT 00:20:48.396 17:25:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@89 -- # iscsicleanup 00:20:48.396 Cleaning up iSCSI connection 00:20:48.396 17:25:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:20:48.396 17:25:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:20:48.396 Logging out of session [sid: 21, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] 00:20:48.396 
Logout of [sid: 21, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] successful. 00:20:48.396 17:25:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:20:48.396 17:25:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@983 -- # rm -rf 00:20:48.396 17:25:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@91 -- # rpc_cmd -s /var/tmp/spdk1.sock spdk_kill_instance SIGTERM 00:20:48.396 17:25:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.396 17:25:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:49.393 17:25:08 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.393 17:25:08 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@92 -- # wait 78079 00:20:50.768 17:25:09 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@93 -- # iscsitestfini 00:20:50.768 17:25:09 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:20:50.768 00:20:50.768 real 0m20.951s 00:20:50.768 user 0m28.851s 00:20:50.768 sys 0m3.380s 00:20:50.768 17:25:09 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:50.768 ************************************ 00:20:50.768 END TEST iscsi_tgt_ip_migration 00:20:50.768 17:25:09 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:20:50.768 ************************************ 00:20:50.768 17:25:09 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:20:50.768 17:25:09 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@40 -- # run_test iscsi_tgt_trace_record /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record/trace_record.sh 00:20:50.768 17:25:09 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:50.768 17:25:09 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:50.768 17:25:09 
iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:20:50.768 ************************************ 00:20:50.768 START TEST iscsi_tgt_trace_record 00:20:50.768 ************************************ 00:20:50.768 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record/trace_record.sh 00:20:51.027 * Looking for test storage... 00:20:51.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@22 -- # 
INITIATOR_IP=10.0.0.2 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@11 -- # iscsitestinit 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@13 -- # TRACE_TMP_FOLDER=./tmp-trace 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@14 -- # TRACE_RECORD_OUTPUT=./tmp-trace/record.trace 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@15 -- # TRACE_RECORD_NOTICE_LOG=./tmp-trace/record.notice 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@16 -- # TRACE_TOOL_LOG=./tmp-trace/trace.log 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@22 -- # '[' -z 10.0.0.1 ']' 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@27 -- # '[' -z 10.0.0.2 ']' 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@32 -- # NUM_TRACE_ENTRIES=4096 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@33 -- # 
MALLOC_BDEV_SIZE=64 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@34 -- # MALLOC_BLOCK_SIZE=4096 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@36 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@37 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@39 -- # timing_enter start_iscsi_tgt 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:20:51.027 start iscsi_tgt with trace enabled 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@41 -- # echo 'start iscsi_tgt with trace enabled' 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@43 -- # iscsi_pid=78406 00:20:51.027 Process pid: 78406 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@44 -- # echo 'Process pid: 78406' 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@42 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xf --num-trace-entries 4096 --tpoint-group all 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@46 -- # trap 'killprocess $iscsi_pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@48 -- # waitforlisten 78406 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@829 -- # '[' -z 78406 ']' 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.027 17:25:09 
iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:51.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:51.027 17:25:09 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:20:51.027 [2024-07-22 17:25:09.925343] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:51.027 [2024-07-22 17:25:09.925558] [ DPDK EAL parameters: iscsi --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78406 ] 00:20:51.286 [2024-07-22 17:25:10.111351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:51.545 [2024-07-22 17:25:10.409884] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask all specified. 00:20:51.545 [2024-07-22 17:25:10.409971] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s iscsi -p 78406' to capture a snapshot of events at runtime. 00:20:51.545 [2024-07-22 17:25:10.409994] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.545 [2024-07-22 17:25:10.410008] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.545 [2024-07-22 17:25:10.410022] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/iscsi_trace.pid78406 for offline analysis/debug. 
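The `-m 0xf` core mask passed to `iscsi_tgt` above expands to four CPU cores, which matches the four "Reactor started on core N" notices that follow. A minimal sketch of the mask decoding (the variable names here are illustrative, not from SPDK):

```shell
#!/usr/bin/env bash
# Decode a hex core mask the way SPDK interprets -m: each set bit selects
# one core, and a reactor thread is started on each selected core.
mask=0xf
cores=()
for ((bit = 0; bit < 32; bit++)); do
    if (( (mask >> bit) & 1 )); then
        cores+=("$bit")
    fi
done
echo "reactors on cores: ${cores[*]}"   # -> reactors on cores: 0 1 2 3
```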
00:20:51.545 [2024-07-22 17:25:10.410224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.545 [2024-07-22 17:25:10.410368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.545 [2024-07-22 17:25:10.411191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.545 [2024-07-22 17:25:10.411217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@862 -- # return 0 00:20:52.480 iscsi_tgt is listening. Running tests... 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@50 -- # echo 'iscsi_tgt is listening. Running tests...' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@52 -- # timing_exit start_iscsi_tgt 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@54 -- # mkdir -p ./tmp-trace 00:20:52.480 Trace record pid: 78441 00:20:52.480 Create bdevs and target nodes 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@56 -- # record_pid=78441 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@57 -- # echo 'Trace record pid: 78441' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@59 -- # RPCS= 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace_record -s iscsi -p 78406 -f ./tmp-trace/record.trace -q 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@60 -- # 
RPCS+='iscsi_create_portal_group 1 10.0.0.1:3260\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@61 -- # RPCS+='iscsi_create_initiator_group 2 ANY 10.0.0.2/32\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@63 -- # echo 'Create bdevs and target nodes' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@64 -- # CONNECTION_NUMBER=15 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # seq 0 15 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc0\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target0 Target0_alias Malloc0:0 1:2 256 -d\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc1\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target1 Target1_alias Malloc1:0 1:2 256 -d\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc2\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target2 Target2_alias Malloc2:0 1:2 256 -d\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for 
i in $(seq 0 $CONNECTION_NUMBER) 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc3\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target3 Target3_alias Malloc3:0 1:2 256 -d\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc4\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target4 Target4_alias Malloc4:0 1:2 256 -d\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc5\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target5 Target5_alias Malloc5:0 1:2 256 -d\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc6\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target6 Target6_alias Malloc6:0 1:2 256 -d\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc7\n' 00:20:52.480 17:25:11 
iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target7 Target7_alias Malloc7:0 1:2 256 -d\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc8\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target8 Target8_alias Malloc8:0 1:2 256 -d\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc9\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target9 Target9_alias Malloc9:0 1:2 256 -d\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc10\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target10 Target10_alias Malloc10:0 1:2 256 -d\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc11\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target11 Target11_alias Malloc11:0 1:2 256 -d\n' 00:20:52.480 17:25:11 
iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc12\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target12 Target12_alias Malloc12:0 1:2 256 -d\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc13\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target13 Target13_alias Malloc13:0 1:2 256 -d\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc14\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target14 Target14_alias Malloc14:0 1:2 256 -d\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc15\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target15 Target15_alias Malloc15:0 1:2 256 -d\n' 00:20:52.480 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:52.481 17:25:11 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@69 -- # echo -e iscsi_create_portal_group 1 '10.0.0.1:3260\niscsi_create_initiator_group' 2 ANY '10.0.0.2/32\nbdev_malloc_create' 64 4096 -b 'Malloc0\niscsi_create_target_node' Target0 Target0_alias Malloc0:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc1\niscsi_create_target_node' Target1 Target1_alias Malloc1:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc2\niscsi_create_target_node' Target2 Target2_alias Malloc2:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc3\niscsi_create_target_node' Target3 Target3_alias Malloc3:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc4\niscsi_create_target_node' Target4 Target4_alias Malloc4:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc5\niscsi_create_target_node' Target5 Target5_alias Malloc5:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc6\niscsi_create_target_node' Target6 Target6_alias Malloc6:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc7\niscsi_create_target_node' Target7 Target7_alias Malloc7:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc8\niscsi_create_target_node' Target8 Target8_alias Malloc8:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc9\niscsi_create_target_node' Target9 Target9_alias Malloc9:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc10\niscsi_create_target_node' Target10 Target10_alias Malloc10:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc11\niscsi_create_target_node' Target11 Target11_alias Malloc11:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc12\niscsi_create_target_node' Target12 Target12_alias Malloc12:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc13\niscsi_create_target_node' Target13 Target13_alias Malloc13:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc14\niscsi_create_target_node' Target14 Target14_alias Malloc14:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc15\niscsi_create_target_node' Target15 Target15_alias Malloc15:0 1:2 256 '-d\n' 00:20:54.379 Malloc0 00:20:54.379 Malloc1 
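The long `echo -e` line above is the batched-RPC pattern used by trace_record.sh: commands are accumulated into one string joined by literal `\n`, then expanded with `echo -e` and piped to `scripts/rpc.py` in a single invocation. A self-contained sketch of that pattern (piping to rpc.py is replaced by a line count, since no SPDK target is assumed to be running):

```shell
#!/usr/bin/env bash
# Accumulate RPC commands separated by literal '\n' (single quotes keep the
# backslash-n un-expanded until echo -e runs).
RPCS=""
RPCS+='iscsi_create_portal_group 1 10.0.0.1:3260\n'
RPCS+='iscsi_create_initiator_group 2 ANY 10.0.0.2/32\n'
for i in $(seq 0 15); do
    RPCS+="bdev_malloc_create 64 4096 -b Malloc$i\n"
    RPCS+="iscsi_create_target_node Target$i Target${i}_alias Malloc$i:0 1:2 256 -d\n"
done
# In the real test this stream is piped to scripts/rpc.py; here we just count
# the generated command lines: 2 group commands + 16 * 2 per-target commands.
echo -e "$RPCS" | grep -c .   # -> 34
```

One `rpc.py` process handling all 34 commands is much faster than 34 separate invocations, which is why the test builds the batch first.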
00:20:54.379 Malloc2 00:20:54.379 Malloc3 00:20:54.379 Malloc4 00:20:54.379 Malloc5 00:20:54.379 Malloc6 00:20:54.379 Malloc7 00:20:54.379 Malloc8 00:20:54.379 Malloc9 00:20:54.379 Malloc10 00:20:54.379 Malloc11 00:20:54.379 Malloc12 00:20:54.379 Malloc13 00:20:54.379 Malloc14 00:20:54.379 Malloc15 00:20:54.379 17:25:12 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@71 -- # sleep 1 00:20:54.947 17:25:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@73 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:20:54.947 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target0 00:20:54.947 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:20:54.947 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:20:54.947 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:20:54.947 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:20:54.947 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:20:54.947 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:20:54.947 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:20:54.947 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:20:54.947 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:20:54.947 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:20:54.947 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target11 00:20:54.947 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target12 00:20:54.947 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target13 00:20:54.947 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target14 00:20:54.947 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target15 00:20:54.947 17:25:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@74 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:20:55.224 [2024-07-22 17:25:13.925595] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.224 [2024-07-22 17:25:13.944478] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.224 [2024-07-22 17:25:13.960673] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.224 [2024-07-22 
17:25:14.002142] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.224 [2024-07-22 17:25:14.052803] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.224 [2024-07-22 17:25:14.067092] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.224 [2024-07-22 17:25:14.076047] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.224 [2024-07-22 17:25:14.128314] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.224 [2024-07-22 17:25:14.148829] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.492 [2024-07-22 17:25:14.182714] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.492 [2024-07-22 17:25:14.199002] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.492 [2024-07-22 17:25:14.228684] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.492 [2024-07-22 17:25:14.254201] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.492 [2024-07-22 17:25:14.285774] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.492 [2024-07-22 17:25:14.310295] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.492 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] 00:20:55.492 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:20:55.492 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:20:55.492 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:20:55.492 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:20:55.492 Logging in to [iface: 
default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:20:55.492 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:20:55.492 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:20:55.492 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:20:55.492 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:20:55.492 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:20:55.492 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:20:55.492 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:20:55.492 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:20:55.492 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:20:55.492 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:20:55.492 Login to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:20:55.492 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:20:55.492 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:20:55.492 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:20:55.492 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:20:55.492 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:20:55.492 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 
00:20:55.492 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:20:55.492 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:20:55.492 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:20:55.492 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:20:55.492 Login to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:20:55.492 Login to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:20:55.492 Login to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:20:55.492 Login to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:20:55.492 Login to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 
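After all 16 logins succeed, the test calls `waitforiscsidevices 16`, which (as the trace below shows) polls `iscsiadm -m session -P 3` up to 20 times until the expected number of attached SCSI disks appears. A generic sketch of that retry loop, with a hypothetical `probe_stub` standing in for the `iscsiadm | grep -c` pipeline so the example runs without a live target:

```shell
#!/usr/bin/env bash
# Poll a probe command until it reports the expected count, retrying up to
# 20 times (mirrors the i<=20 loop in iscsi_tgt/common.sh).
waitfordevices() {
    local num=$1 probe=$2 i
    for ((i = 1; i <= 20; i++)); do
        if [ "$($probe)" -eq "$num" ]; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# Stand-in for: iscsiadm -m session -P 3 | grep -c 'Attached scsi disk sd[a-z]*'
probe_stub() { echo 16; }

waitfordevices 16 probe_stub && echo "all 16 devices attached"
```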
00:20:55.492 17:25:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@75 -- # waitforiscsidevices 16 00:20:55.492 17:25:14 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@116 -- # local num=16 00:20:55.492 17:25:14 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:20:55.492 17:25:14 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:20:55.492 17:25:14 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:20:55.492 17:25:14 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:20:55.492 [2024-07-22 17:25:14.329284] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.492 Running FIO 00:20:55.492 17:25:14 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # n=16 00:20:55.492 17:25:14 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@120 -- # '[' 16 -ne 16 ']' 00:20:55.492 17:25:14 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@123 -- # return 0 00:20:55.492 17:25:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@77 -- # trap 'iscsicleanup; killprocess $iscsi_pid; killprocess $record_pid; delete_tmp_files; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:20:55.492 17:25:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@79 -- # echo 'Running FIO' 00:20:55.492 17:25:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 00:20:55.492 [global] 00:20:55.492 thread=1 00:20:55.492 invalidate=1 00:20:55.492 rw=randrw 00:20:55.492 time_based=1 00:20:55.492 runtime=1 00:20:55.492 ioengine=libaio 00:20:55.492 direct=1 00:20:55.492 bs=131072 00:20:55.492 iodepth=32 00:20:55.492 norandommap=1 00:20:55.492 numjobs=1 00:20:55.492 00:20:55.492 [job0] 00:20:55.492 filename=/dev/sda 00:20:55.492 [job1] 
00:20:55.492 filename=/dev/sdb 00:20:55.492 [job2] 00:20:55.492 filename=/dev/sdc 00:20:55.492 [job3] 00:20:55.492 filename=/dev/sde 00:20:55.492 [job4] 00:20:55.492 filename=/dev/sdd 00:20:55.492 [job5] 00:20:55.492 filename=/dev/sdf 00:20:55.750 [job6] 00:20:55.750 filename=/dev/sdg 00:20:55.750 [job7] 00:20:55.750 filename=/dev/sdh 00:20:55.750 [job8] 00:20:55.750 filename=/dev/sdi 00:20:55.750 [job9] 00:20:55.750 filename=/dev/sdj 00:20:55.750 [job10] 00:20:55.750 filename=/dev/sdk 00:20:55.750 [job11] 00:20:55.750 filename=/dev/sdl 00:20:55.750 [job12] 00:20:55.750 filename=/dev/sdm 00:20:55.750 [job13] 00:20:55.750 filename=/dev/sdn 00:20:55.750 [job14] 00:20:55.750 filename=/dev/sdo 00:20:55.750 [job15] 00:20:55.750 filename=/dev/sdp 00:20:55.750 queue_depth set to 113 (sda) 00:20:55.750 queue_depth set to 113 (sdb) 00:20:55.750 queue_depth set to 113 (sdc) 00:20:55.750 queue_depth set to 113 (sde) 00:20:55.750 queue_depth set to 113 (sdd) 00:20:56.008 queue_depth set to 113 (sdf) 00:20:56.008 queue_depth set to 113 (sdg) 00:20:56.008 queue_depth set to 113 (sdh) 00:20:56.008 queue_depth set to 113 (sdi) 00:20:56.008 queue_depth set to 113 (sdj) 00:20:56.008 queue_depth set to 113 (sdk) 00:20:56.008 queue_depth set to 113 (sdl) 00:20:56.008 queue_depth set to 113 (sdm) 00:20:56.008 queue_depth set to 113 (sdn) 00:20:56.008 queue_depth set to 113 (sdo) 00:20:56.008 queue_depth set to 113 (sdp) 00:20:56.266 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:56.266 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:56.266 job2: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:56.266 job3: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:56.266 job4: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 
128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:56.266 job5: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:56.266 job6: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:56.266 job7: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:56.266 job8: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:56.266 job9: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:56.266 job10: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:56.266 job11: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:56.266 job12: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:56.266 job13: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:56.266 job14: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:56.266 job15: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:20:56.266 fio-3.35 00:20:56.266 Starting 16 threads 00:20:56.266 [2024-07-22 17:25:15.031324] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:56.266 [2024-07-22 17:25:15.034411] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:56.266 [2024-07-22 17:25:15.036888] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:56.266 [2024-07-22 17:25:15.040729] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:56.266 
[2024-07-22 17:25:15.043497] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:56.266 [2024-07-22 17:25:15.045856] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:56.266 [2024-07-22 17:25:15.048276] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:56.266 [2024-07-22 17:25:15.050999] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:56.266 [2024-07-22 17:25:15.053472] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:56.266 [2024-07-22 17:25:15.056002] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:56.266 [2024-07-22 17:25:15.058547] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:56.266 [2024-07-22 17:25:15.061406] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:56.266 [2024-07-22 17:25:15.064158] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:56.266 [2024-07-22 17:25:15.066992] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:56.266 [2024-07-22 17:25:15.069695] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:56.266 [2024-07-22 17:25:15.073078] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.652 [2024-07-22 17:25:16.386417] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.652 [2024-07-22 17:25:16.397360] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.652 [2024-07-22 17:25:16.400073] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.652 [2024-07-22 17:25:16.403180] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.652 [2024-07-22 17:25:16.406559] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.652 [2024-07-22 17:25:16.409013] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.652 [2024-07-22 17:25:16.411950] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.652 [2024-07-22 17:25:16.415043] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.652 [2024-07-22 17:25:16.417327] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.652 [2024-07-22 17:25:16.419516] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.652 [2024-07-22 17:25:16.421698] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.652 [2024-07-22 17:25:16.423812] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.652 [2024-07-22 17:25:16.426636] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.652 [2024-07-22 17:25:16.429007] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.652 [2024-07-22 17:25:16.431256] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.652 00:20:57.652 job0: (groupid=0, jobs=1): err= 0: pid=78823: Mon Jul 22 17:25:16 2024 00:20:57.652 read: IOPS=428, BW=53.6MiB/s (56.2MB/s)(55.8MiB/1041msec) 00:20:57.652 slat (usec): min=6, max=500, avg=24.85, stdev=43.45 00:20:57.652 clat (usec): min=1595, max=48015, avg=9009.34, stdev=3331.54 00:20:57.652 lat (usec): min=1636, max=48029, avg=9034.19, stdev=3329.32 00:20:57.652 clat percentiles (usec): 00:20:57.652 | 1.00th=[ 3163], 5.00th=[ 7570], 10.00th=[ 7898], 20.00th=[ 8225], 00:20:57.652 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8848], 00:20:57.652 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[10290], 00:20:57.652 | 99.00th=[14353], 
99.50th=[43779], 99.90th=[47973], 99.95th=[47973], 00:20:57.652 | 99.99th=[47973] 00:20:57.652 bw ( KiB/s): min=55808, max=57600, per=6.22%, avg=56704.00, stdev=1267.14, samples=2 00:20:57.652 iops : min= 436, max= 450, avg=443.00, stdev= 9.90, samples=2 00:20:57.652 write: IOPS=464, BW=58.1MiB/s (60.9MB/s)(60.5MiB/1041msec); 0 zone resets 00:20:57.652 slat (usec): min=7, max=511, avg=33.29, stdev=56.08 00:20:57.652 clat (msec): min=8, max=101, avg=60.30, stdev= 8.64 00:20:57.652 lat (msec): min=8, max=101, avg=60.34, stdev= 8.65 00:20:57.652 clat percentiles (msec): 00:20:57.652 | 1.00th=[ 21], 5.00th=[ 51], 10.00th=[ 55], 20.00th=[ 58], 00:20:57.652 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 62], 00:20:57.652 | 70.00th=[ 63], 80.00th=[ 65], 90.00th=[ 66], 95.00th=[ 67], 00:20:57.652 | 99.00th=[ 93], 99.50th=[ 99], 99.90th=[ 102], 99.95th=[ 102], 00:20:57.652 | 99.99th=[ 102] 00:20:57.652 bw ( KiB/s): min=57856, max=58624, per=6.23%, avg=58240.00, stdev=543.06, samples=2 00:20:57.652 iops : min= 452, max= 458, avg=455.00, stdev= 4.24, samples=2 00:20:57.652 lat (msec) : 2=0.22%, 4=0.32%, 10=43.98%, 20=3.55%, 50=2.37% 00:20:57.652 lat (msec) : 100=49.46%, 250=0.11% 00:20:57.652 cpu : usr=0.48%, sys=1.83%, ctx=837, majf=0, minf=1 00:20:57.652 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=96.7%, >=64=0.0% 00:20:57.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.652 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:20:57.652 issued rwts: total=446,484,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.652 latency : target=0, window=0, percentile=100.00%, depth=32 00:20:57.652 job1: (groupid=0, jobs=1): err= 0: pid=78824: Mon Jul 22 17:25:16 2024 00:20:57.652 read: IOPS=411, BW=51.4MiB/s (53.9MB/s)(53.6MiB/1043msec) 00:20:57.652 slat (usec): min=6, max=821, avg=26.50, stdev=59.99 00:20:57.652 clat (usec): min=734, max=47909, avg=9286.21, stdev=3094.00 00:20:57.652 lat (usec): min=811, 
max=47919, avg=9312.71, stdev=3090.37 00:20:57.652 clat percentiles (usec): 00:20:57.652 | 1.00th=[ 6652], 5.00th=[ 7701], 10.00th=[ 8029], 20.00th=[ 8291], 00:20:57.652 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:20:57.652 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[10290], 95.00th=[12780], 00:20:57.652 | 99.00th=[16188], 99.50th=[16188], 99.90th=[47973], 99.95th=[47973], 00:20:57.652 | 99.99th=[47973] 00:20:57.652 bw ( KiB/s): min=52630, max=56689, per=5.99%, avg=54659.50, stdev=2870.15, samples=2 00:20:57.652 iops : min= 411, max= 442, avg=426.50, stdev=21.92, samples=2 00:20:57.652 write: IOPS=470, BW=58.8MiB/s (61.7MB/s)(61.4MiB/1043msec); 0 zone resets 00:20:57.652 slat (usec): min=8, max=730, avg=32.16, stdev=70.09 00:20:57.652 clat (usec): min=3970, max=98010, avg=59629.29, stdev=11682.16 00:20:57.652 lat (usec): min=3988, max=98023, avg=59661.45, stdev=11683.40 00:20:57.652 clat percentiles (usec): 00:20:57.652 | 1.00th=[ 3982], 5.00th=[43779], 10.00th=[54789], 20.00th=[57410], 00:20:57.652 | 30.00th=[58983], 40.00th=[60031], 50.00th=[61080], 60.00th=[62129], 00:20:57.652 | 70.00th=[63701], 80.00th=[65274], 90.00th=[66847], 95.00th=[68682], 00:20:57.652 | 99.00th=[88605], 99.50th=[92799], 99.90th=[98042], 99.95th=[98042], 00:20:57.652 | 99.99th=[98042] 00:20:57.652 bw ( KiB/s): min=58484, max=59784, per=6.33%, avg=59134.00, stdev=919.24, samples=2 00:20:57.652 iops : min= 456, max= 467, avg=461.50, stdev= 7.78, samples=2 00:20:57.652 lat (usec) : 750=0.11% 00:20:57.652 lat (msec) : 4=0.65%, 10=41.52%, 20=5.87%, 50=1.96%, 100=49.89% 00:20:57.652 cpu : usr=0.86%, sys=1.25%, ctx=905, majf=0, minf=1 00:20:57.652 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=96.6%, >=64=0.0% 00:20:57.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.652 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:20:57.652 issued rwts: total=429,491,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:20:57.652 latency : target=0, window=0, percentile=100.00%, depth=32 00:20:57.652 job2: (groupid=0, jobs=1): err= 0: pid=78839: Mon Jul 22 17:25:16 2024 00:20:57.652 read: IOPS=454, BW=56.8MiB/s (59.6MB/s)(58.9MiB/1036msec) 00:20:57.652 slat (usec): min=6, max=1091, avg=19.48, stdev=53.14 00:20:57.652 clat (usec): min=3067, max=42981, avg=9398.89, stdev=2676.23 00:20:57.652 lat (usec): min=3075, max=42993, avg=9418.37, stdev=2674.52 00:20:57.652 clat percentiles (usec): 00:20:57.652 | 1.00th=[ 3458], 5.00th=[ 8094], 10.00th=[ 8455], 20.00th=[ 8717], 00:20:57.652 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:20:57.652 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10421], 00:20:57.652 | 99.00th=[12780], 99.50th=[36963], 99.90th=[42730], 99.95th=[42730], 00:20:57.652 | 99.99th=[42730] 00:20:57.652 bw ( KiB/s): min=57740, max=62076, per=6.57%, avg=59908.00, stdev=3066.02, samples=2 00:20:57.652 iops : min= 451, max= 484, avg=467.50, stdev=23.33, samples=2 00:20:57.652 write: IOPS=446, BW=55.9MiB/s (58.6MB/s)(57.9MiB/1036msec); 0 zone resets 00:20:57.652 slat (usec): min=7, max=1661, avg=24.09, stdev=77.55 00:20:57.652 clat (usec): min=9360, max=95964, avg=61900.46, stdev=9109.85 00:20:57.652 lat (usec): min=9393, max=95974, avg=61924.55, stdev=9110.96 00:20:57.652 clat percentiles (usec): 00:20:57.652 | 1.00th=[20317], 5.00th=[48497], 10.00th=[55313], 20.00th=[58983], 00:20:57.652 | 30.00th=[60556], 40.00th=[61604], 50.00th=[62653], 60.00th=[64226], 00:20:57.652 | 70.00th=[65274], 80.00th=[66847], 90.00th=[68682], 95.00th=[69731], 00:20:57.652 | 99.00th=[89654], 99.50th=[93848], 99.90th=[95945], 99.95th=[95945], 00:20:57.652 | 99.99th=[95945] 00:20:57.652 bw ( KiB/s): min=54637, max=56463, per=5.94%, avg=55550.00, stdev=1291.18, samples=2 00:20:57.652 iops : min= 426, max= 441, avg=433.50, stdev=10.61, samples=2 00:20:57.652 lat (msec) : 4=0.54%, 10=43.90%, 20=6.10%, 50=2.46%, 100=47.00% 00:20:57.652 cpu : usr=0.68%, 
sys=1.35%, ctx=902, majf=0, minf=1 00:20:57.652 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=96.7%, >=64=0.0% 00:20:57.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.652 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:20:57.652 issued rwts: total=471,463,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.652 latency : target=0, window=0, percentile=100.00%, depth=32 00:20:57.652 job3: (groupid=0, jobs=1): err= 0: pid=78845: Mon Jul 22 17:25:16 2024 00:20:57.652 read: IOPS=434, BW=54.3MiB/s (56.9MB/s)(56.2MiB/1036msec) 00:20:57.652 slat (usec): min=6, max=1941, avg=28.81, stdev=99.11 00:20:57.652 clat (usec): min=2528, max=42497, avg=9090.81, stdev=3160.05 00:20:57.652 lat (usec): min=2537, max=42529, avg=9119.62, stdev=3160.06 00:20:57.652 clat percentiles (usec): 00:20:57.652 | 1.00th=[ 3261], 5.00th=[ 7570], 10.00th=[ 7767], 20.00th=[ 8029], 00:20:57.652 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8848], 00:20:57.652 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[10290], 95.00th=[11207], 00:20:57.652 | 99.00th=[18744], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:20:57.652 | 99.99th=[42730] 00:20:57.652 bw ( KiB/s): min=56320, max=58112, per=6.27%, avg=57216.00, stdev=1267.14, samples=2 00:20:57.652 iops : min= 440, max= 454, avg=447.00, stdev= 9.90, samples=2 00:20:57.652 write: IOPS=466, BW=58.3MiB/s (61.1MB/s)(60.4MiB/1036msec); 0 zone resets 00:20:57.652 slat (usec): min=7, max=1282, avg=31.39, stdev=79.45 00:20:57.652 clat (usec): min=10456, max=91595, avg=59986.06, stdev=7867.87 00:20:57.652 lat (usec): min=10485, max=91610, avg=60017.45, stdev=7866.76 00:20:57.652 clat percentiles (usec): 00:20:57.652 | 1.00th=[24773], 5.00th=[51643], 10.00th=[54264], 20.00th=[56886], 00:20:57.652 | 30.00th=[57934], 40.00th=[58983], 50.00th=[60031], 60.00th=[61080], 00:20:57.653 | 70.00th=[62653], 80.00th=[63701], 90.00th=[66847], 95.00th=[70779], 00:20:57.653 | 99.00th=[85459], 
99.50th=[87557], 99.90th=[91751], 99.95th=[91751], 00:20:57.653 | 99.99th=[91751] 00:20:57.653 bw ( KiB/s): min=55296, max=61184, per=6.23%, avg=58240.00, stdev=4163.44, samples=2 00:20:57.653 iops : min= 432, max= 478, avg=455.00, stdev=32.53, samples=2 00:20:57.653 lat (msec) : 4=0.54%, 10=42.02%, 20=5.68%, 50=2.04%, 100=49.73% 00:20:57.653 cpu : usr=0.58%, sys=1.84%, ctx=883, majf=0, minf=1 00:20:57.653 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=96.7%, >=64=0.0% 00:20:57.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.653 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:20:57.653 issued rwts: total=450,483,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.653 latency : target=0, window=0, percentile=100.00%, depth=32 00:20:57.653 job4: (groupid=0, jobs=1): err= 0: pid=78846: Mon Jul 22 17:25:16 2024 00:20:57.653 read: IOPS=452, BW=56.6MiB/s (59.3MB/s)(58.8MiB/1038msec) 00:20:57.653 slat (usec): min=8, max=976, avg=30.56, stdev=69.40 00:20:57.653 clat (usec): min=2916, max=43575, avg=9155.56, stdev=3439.23 00:20:57.653 lat (usec): min=2926, max=43587, avg=9186.12, stdev=3437.24 00:20:57.653 clat percentiles (usec): 00:20:57.653 | 1.00th=[ 4047], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8225], 00:20:57.653 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8848], 00:20:57.653 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[10028], 95.00th=[11076], 00:20:57.653 | 99.00th=[38011], 99.50th=[39584], 99.90th=[43779], 99.95th=[43779], 00:20:57.653 | 99.99th=[43779] 00:20:57.653 bw ( KiB/s): min=54784, max=64384, per=6.53%, avg=59584.00, stdev=6788.23, samples=2 00:20:57.653 iops : min= 428, max= 503, avg=465.50, stdev=53.03, samples=2 00:20:57.653 write: IOPS=465, BW=58.2MiB/s (61.0MB/s)(60.4MiB/1038msec); 0 zone resets 00:20:57.653 slat (usec): min=9, max=1254, avg=30.10, stdev=67.74 00:20:57.653 clat (usec): min=8557, max=92530, avg=59671.90, stdev=7967.77 00:20:57.653 lat (usec): 
min=8583, max=92556, avg=59702.01, stdev=7968.80 00:20:57.653 clat percentiles (usec): 00:20:57.653 | 1.00th=[20841], 5.00th=[50594], 10.00th=[54264], 20.00th=[56886], 00:20:57.653 | 30.00th=[58459], 40.00th=[58983], 50.00th=[60556], 60.00th=[61080], 00:20:57.653 | 70.00th=[62653], 80.00th=[63701], 90.00th=[65274], 95.00th=[66847], 00:20:57.653 | 99.00th=[83362], 99.50th=[87557], 99.90th=[92799], 99.95th=[92799], 00:20:57.653 | 99.99th=[92799] 00:20:57.653 bw ( KiB/s): min=57971, max=58880, per=6.25%, avg=58425.50, stdev=642.76, samples=2 00:20:57.653 iops : min= 452, max= 460, avg=456.00, stdev= 5.66, samples=2 00:20:57.653 lat (msec) : 4=0.42%, 10=44.07%, 20=4.72%, 50=2.20%, 100=48.58% 00:20:57.653 cpu : usr=0.77%, sys=1.74%, ctx=832, majf=0, minf=1 00:20:57.653 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=96.7%, >=64=0.0% 00:20:57.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.653 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:20:57.653 issued rwts: total=470,483,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.653 latency : target=0, window=0, percentile=100.00%, depth=32 00:20:57.653 job5: (groupid=0, jobs=1): err= 0: pid=78847: Mon Jul 22 17:25:16 2024 00:20:57.653 read: IOPS=476, BW=59.6MiB/s (62.5MB/s)(62.1MiB/1042msec) 00:20:57.653 slat (usec): min=6, max=787, avg=20.58, stdev=49.62 00:20:57.653 clat (usec): min=5856, max=48121, avg=9242.72, stdev=3746.78 00:20:57.653 lat (usec): min=5870, max=48130, avg=9263.30, stdev=3744.70 00:20:57.653 clat percentiles (usec): 00:20:57.653 | 1.00th=[ 6718], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8356], 00:20:57.653 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:20:57.653 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[10159], 00:20:57.653 | 99.00th=[41681], 99.50th=[45876], 99.90th=[47973], 99.95th=[47973], 00:20:57.653 | 99.99th=[47973] 00:20:57.653 bw ( KiB/s): min=58762, max=67072, per=6.90%, 
avg=62917.00, stdev=5876.06, samples=2 00:20:57.653 iops : min= 459, max= 524, avg=491.50, stdev=45.96, samples=2 00:20:57.653 write: IOPS=462, BW=57.8MiB/s (60.6MB/s)(60.2MiB/1042msec); 0 zone resets 00:20:57.653 slat (usec): min=8, max=817, avg=31.80, stdev=77.66 00:20:57.653 clat (usec): min=3718, max=97375, avg=59460.02, stdev=9085.11 00:20:57.653 lat (usec): min=3759, max=97397, avg=59491.82, stdev=9082.09 00:20:57.653 clat percentiles (usec): 00:20:57.653 | 1.00th=[11731], 5.00th=[51643], 10.00th=[54264], 20.00th=[56361], 00:20:57.653 | 30.00th=[57410], 40.00th=[58459], 50.00th=[59507], 60.00th=[61080], 00:20:57.653 | 70.00th=[62129], 80.00th=[64226], 90.00th=[66847], 95.00th=[67634], 00:20:57.653 | 99.00th=[86508], 99.50th=[90702], 99.90th=[96994], 99.95th=[96994], 00:20:57.653 | 99.99th=[96994] 00:20:57.653 bw ( KiB/s): min=56832, max=59528, per=6.22%, avg=58180.00, stdev=1906.36, samples=2 00:20:57.653 iops : min= 444, max= 465, avg=454.50, stdev=14.85, samples=2 00:20:57.653 lat (msec) : 4=0.10%, 10=47.60%, 20=3.27%, 50=1.94%, 100=47.09% 00:20:57.653 cpu : usr=0.77%, sys=1.25%, ctx=902, majf=0, minf=1 00:20:57.653 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=96.8%, >=64=0.0% 00:20:57.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.653 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:20:57.653 issued rwts: total=497,482,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.653 latency : target=0, window=0, percentile=100.00%, depth=32 00:20:57.653 job6: (groupid=0, jobs=1): err= 0: pid=78857: Mon Jul 22 17:25:16 2024 00:20:57.653 read: IOPS=439, BW=54.9MiB/s (57.6MB/s)(57.4MiB/1045msec) 00:20:57.653 slat (usec): min=6, max=642, avg=21.67, stdev=42.49 00:20:57.653 clat (usec): min=1749, max=53462, avg=9511.71, stdev=3120.90 00:20:57.653 lat (usec): min=1765, max=53473, avg=9533.38, stdev=3119.13 00:20:57.653 clat percentiles (usec): 00:20:57.653 | 1.00th=[ 2638], 5.00th=[ 8291], 10.00th=[ 
8455], 20.00th=[ 8717], 00:20:57.653 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:20:57.653 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[11207], 00:20:57.653 | 99.00th=[14746], 99.50th=[14877], 99.90th=[53216], 99.95th=[53216], 00:20:57.653 | 99.99th=[53216] 00:20:57.653 bw ( KiB/s): min=57202, max=59904, per=6.42%, avg=58553.00, stdev=1910.60, samples=2 00:20:57.653 iops : min= 446, max= 468, avg=457.00, stdev=15.56, samples=2 00:20:57.653 write: IOPS=448, BW=56.1MiB/s (58.8MB/s)(58.6MiB/1045msec); 0 zone resets 00:20:57.653 slat (usec): min=8, max=310, avg=24.68, stdev=27.36 00:20:57.653 clat (msec): min=4, max=108, avg=61.78, stdev=12.02 00:20:57.653 lat (msec): min=4, max=108, avg=61.80, stdev=12.02 00:20:57.653 clat percentiles (msec): 00:20:57.653 | 1.00th=[ 5], 5.00th=[ 46], 10.00th=[ 55], 20.00th=[ 59], 00:20:57.653 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 64], 60.00th=[ 65], 00:20:57.653 | 70.00th=[ 66], 80.00th=[ 68], 90.00th=[ 70], 95.00th=[ 73], 00:20:57.653 | 99.00th=[ 97], 99.50th=[ 102], 99.90th=[ 109], 99.95th=[ 109], 00:20:57.653 | 99.99th=[ 109] 00:20:57.653 bw ( KiB/s): min=56064, max=56432, per=6.02%, avg=56248.00, stdev=260.22, samples=2 00:20:57.653 iops : min= 438, max= 440, avg=439.00, stdev= 1.41, samples=2 00:20:57.653 lat (msec) : 2=0.11%, 4=0.43%, 10=42.67%, 20=7.33%, 50=2.05% 00:20:57.653 lat (msec) : 100=47.09%, 250=0.32% 00:20:57.653 cpu : usr=0.38%, sys=1.53%, ctx=893, majf=0, minf=1 00:20:57.653 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=96.7%, >=64=0.0% 00:20:57.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.653 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:20:57.653 issued rwts: total=459,469,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.653 latency : target=0, window=0, percentile=100.00%, depth=32 00:20:57.653 job7: (groupid=0, jobs=1): err= 0: pid=78926: Mon Jul 22 17:25:16 2024 00:20:57.653 read: 
IOPS=414, BW=51.9MiB/s (54.4MB/s)(53.9MiB/1039msec) 00:20:57.653 slat (usec): min=7, max=441, avg=22.37, stdev=37.14 00:20:57.653 clat (usec): min=4601, max=44397, avg=9177.82, stdev=3379.14 00:20:57.653 lat (usec): min=4612, max=44417, avg=9200.20, stdev=3377.93 00:20:57.653 clat percentiles (usec): 00:20:57.653 | 1.00th=[ 6652], 5.00th=[ 7439], 10.00th=[ 7832], 20.00th=[ 8094], 00:20:57.653 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8848], 00:20:57.653 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[10421], 95.00th=[11076], 00:20:57.653 | 99.00th=[17433], 99.50th=[39060], 99.90th=[44303], 99.95th=[44303], 00:20:57.653 | 99.99th=[44303] 00:20:57.653 bw ( KiB/s): min=51968, max=57229, per=5.99%, avg=54598.50, stdev=3720.09, samples=2 00:20:57.653 iops : min= 406, max= 447, avg=426.50, stdev=28.99, samples=2 00:20:57.653 write: IOPS=464, BW=58.1MiB/s (60.9MB/s)(60.4MiB/1039msec); 0 zone resets 00:20:57.653 slat (usec): min=9, max=751, avg=31.63, stdev=66.42 00:20:57.653 clat (usec): min=10908, max=93638, avg=60472.86, stdev=7729.79 00:20:57.653 lat (usec): min=10938, max=93658, avg=60504.49, stdev=7730.24 00:20:57.653 clat percentiles (usec): 00:20:57.653 | 1.00th=[25297], 5.00th=[52167], 10.00th=[54264], 20.00th=[56886], 00:20:57.653 | 30.00th=[58459], 40.00th=[60031], 50.00th=[61080], 60.00th=[62129], 00:20:57.653 | 70.00th=[63177], 80.00th=[64750], 90.00th=[66847], 95.00th=[69731], 00:20:57.653 | 99.00th=[83362], 99.50th=[91751], 99.90th=[93848], 99.95th=[93848], 00:20:57.653 | 99.99th=[93848] 00:20:57.653 bw ( KiB/s): min=55185, max=61440, per=6.24%, avg=58312.50, stdev=4422.95, samples=2 00:20:57.653 iops : min= 431, max= 480, avg=455.50, stdev=34.65, samples=2 00:20:57.653 lat (msec) : 10=41.03%, 20=6.02%, 50=2.19%, 100=50.77% 00:20:57.653 cpu : usr=0.67%, sys=1.54%, ctx=880, majf=0, minf=1 00:20:57.653 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=96.6%, >=64=0.0% 00:20:57.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:20:57.653 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:20:57.653 issued rwts: total=431,483,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.653 latency : target=0, window=0, percentile=100.00%, depth=32 00:20:57.653 job8: (groupid=0, jobs=1): err= 0: pid=78931: Mon Jul 22 17:25:16 2024 00:20:57.654 read: IOPS=416, BW=52.1MiB/s (54.6MB/s)(54.6MiB/1049msec) 00:20:57.654 slat (usec): min=7, max=593, avg=27.56, stdev=54.78 00:20:57.654 clat (usec): min=1534, max=52652, avg=8904.91, stdev=4378.86 00:20:57.654 lat (usec): min=1555, max=52672, avg=8932.47, stdev=4375.50 00:20:57.654 clat percentiles (usec): 00:20:57.654 | 1.00th=[ 1696], 5.00th=[ 4146], 10.00th=[ 7832], 20.00th=[ 8160], 00:20:57.654 | 30.00th=[ 8356], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8848], 00:20:57.654 | 70.00th=[ 9110], 80.00th=[ 9241], 90.00th=[ 9896], 95.00th=[11338], 00:20:57.654 | 99.00th=[14222], 99.50th=[50070], 99.90th=[52691], 99.95th=[52691], 00:20:57.654 | 99.99th=[52691] 00:20:57.654 bw ( KiB/s): min=51815, max=58997, per=6.07%, avg=55406.00, stdev=5078.44, samples=2 00:20:57.654 iops : min= 404, max= 460, avg=432.00, stdev=39.60, samples=2 00:20:57.654 write: IOPS=465, BW=58.2MiB/s (61.0MB/s)(61.0MiB/1049msec); 0 zone resets 00:20:57.654 slat (usec): min=7, max=907, avg=32.97, stdev=60.87 00:20:57.654 clat (msec): min=2, max=108, avg=60.51, stdev=11.54 00:20:57.654 lat (msec): min=2, max=108, avg=60.54, stdev=11.53 00:20:57.654 clat percentiles (msec): 00:20:57.654 | 1.00th=[ 5], 5.00th=[ 52], 10.00th=[ 56], 20.00th=[ 58], 00:20:57.654 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 63], 00:20:57.654 | 70.00th=[ 64], 80.00th=[ 66], 90.00th=[ 67], 95.00th=[ 70], 00:20:57.654 | 99.00th=[ 93], 99.50th=[ 104], 99.90th=[ 109], 99.95th=[ 109], 00:20:57.654 | 99.99th=[ 109] 00:20:57.654 bw ( KiB/s): min=58484, max=59254, per=6.30%, avg=58869.00, stdev=544.47, samples=2 00:20:57.654 iops : min= 456, max= 462, avg=459.00, stdev= 
4.24, samples=2 00:20:57.654 lat (msec) : 2=1.08%, 4=1.51%, 10=41.84%, 20=3.68%, 50=1.41% 00:20:57.654 lat (msec) : 100=50.16%, 250=0.32% 00:20:57.654 cpu : usr=0.67%, sys=1.53%, ctx=920, majf=0, minf=1 00:20:57.654 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=96.6%, >=64=0.0% 00:20:57.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.654 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:20:57.654 issued rwts: total=437,488,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.654 latency : target=0, window=0, percentile=100.00%, depth=32 00:20:57.654 job9: (groupid=0, jobs=1): err= 0: pid=78932: Mon Jul 22 17:25:16 2024 00:20:57.654 read: IOPS=490, BW=61.3MiB/s (64.3MB/s)(63.2MiB/1031msec) 00:20:57.654 slat (usec): min=7, max=998, avg=28.60, stdev=66.32 00:20:57.654 clat (usec): min=1039, max=37163, avg=9024.36, stdev=2506.33 00:20:57.654 lat (usec): min=1048, max=37177, avg=9052.97, stdev=2503.52 00:20:57.654 clat percentiles (usec): 00:20:57.654 | 1.00th=[ 4080], 5.00th=[ 7504], 10.00th=[ 7963], 20.00th=[ 8225], 00:20:57.654 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8979], 00:20:57.654 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9896], 95.00th=[11207], 00:20:57.654 | 99.00th=[14353], 99.50th=[34866], 99.90th=[36963], 99.95th=[36963], 00:20:57.654 | 99.99th=[36963] 00:20:57.654 bw ( KiB/s): min=58880, max=70028, per=7.07%, avg=64454.00, stdev=7882.83, samples=2 00:20:57.654 iops : min= 460, max= 547, avg=503.50, stdev=61.52, samples=2 00:20:57.654 write: IOPS=469, BW=58.7MiB/s (61.5MB/s)(60.5MiB/1031msec); 0 zone resets 00:20:57.654 slat (usec): min=9, max=1142, avg=36.88, stdev=85.19 00:20:57.654 clat (usec): min=10102, max=82786, avg=58563.02, stdev=7879.27 00:20:57.654 lat (usec): min=10122, max=82820, avg=58599.90, stdev=7881.63 00:20:57.654 clat percentiles (usec): 00:20:57.654 | 1.00th=[22676], 5.00th=[46924], 10.00th=[52691], 20.00th=[55313], 00:20:57.654 | 30.00th=[56886], 
40.00th=[58459], 50.00th=[60031], 60.00th=[61080], 00:20:57.654 | 70.00th=[62129], 80.00th=[63177], 90.00th=[64750], 95.00th=[65799], 00:20:57.654 | 99.00th=[76022], 99.50th=[78119], 99.90th=[82314], 99.95th=[82314], 00:20:57.654 | 99.99th=[82314] 00:20:57.654 bw ( KiB/s): min=56689, max=59904, per=6.24%, avg=58296.50, stdev=2273.35, samples=2 00:20:57.654 iops : min= 442, max= 468, avg=455.00, stdev=18.38, samples=2 00:20:57.654 lat (msec) : 2=0.10%, 4=0.30%, 10=46.16%, 20=4.65%, 50=3.23% 00:20:57.654 lat (msec) : 100=45.56% 00:20:57.654 cpu : usr=0.68%, sys=1.84%, ctx=864, majf=0, minf=1 00:20:57.654 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=96.9%, >=64=0.0% 00:20:57.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.654 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:20:57.654 issued rwts: total=506,484,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.654 latency : target=0, window=0, percentile=100.00%, depth=32 00:20:57.654 job10: (groupid=0, jobs=1): err= 0: pid=78933: Mon Jul 22 17:25:16 2024 00:20:57.654 read: IOPS=476, BW=59.5MiB/s (62.4MB/s)(62.4MiB/1048msec) 00:20:57.654 slat (usec): min=6, max=277, avg=17.67, stdev=22.45 00:20:57.654 clat (usec): min=909, max=56524, avg=9614.53, stdev=4590.42 00:20:57.654 lat (usec): min=919, max=56533, avg=9632.20, stdev=4589.60 00:20:57.654 clat percentiles (usec): 00:20:57.654 | 1.00th=[ 2474], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8848], 00:20:57.654 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:20:57.654 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10552], 00:20:57.654 | 99.00th=[49021], 99.50th=[54264], 99.90th=[56361], 99.95th=[56361], 00:20:57.654 | 99.99th=[56361] 00:20:57.654 bw ( KiB/s): min=58741, max=67840, per=6.94%, avg=63290.50, stdev=6433.96, samples=2 00:20:57.654 iops : min= 458, max= 530, avg=494.00, stdev=50.91, samples=2 00:20:57.654 write: IOPS=439, BW=55.0MiB/s 
(57.7MB/s)(57.6MiB/1048msec); 0 zone resets 00:20:57.654 slat (usec): min=7, max=426, avg=22.19, stdev=26.04 00:20:57.654 clat (msec): min=6, max=106, avg=62.11, stdev= 9.93 00:20:57.654 lat (msec): min=6, max=106, avg=62.14, stdev= 9.93 00:20:57.654 clat percentiles (msec): 00:20:57.654 | 1.00th=[ 17], 5.00th=[ 53], 10.00th=[ 56], 20.00th=[ 59], 00:20:57.654 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 64], 00:20:57.654 | 70.00th=[ 66], 80.00th=[ 67], 90.00th=[ 69], 95.00th=[ 71], 00:20:57.654 | 99.00th=[ 97], 99.50th=[ 105], 99.90th=[ 108], 99.95th=[ 108], 00:20:57.654 | 99.99th=[ 108] 00:20:57.654 bw ( KiB/s): min=54784, max=56432, per=5.95%, avg=55608.00, stdev=1165.31, samples=2 00:20:57.654 iops : min= 428, max= 440, avg=434.00, stdev= 8.49, samples=2 00:20:57.654 lat (usec) : 1000=0.21% 00:20:57.654 lat (msec) : 2=0.21%, 4=0.73%, 10=45.52%, 20=5.42%, 50=1.56% 00:20:57.654 lat (msec) : 100=46.04%, 250=0.31% 00:20:57.654 cpu : usr=0.67%, sys=1.24%, ctx=919, majf=0, minf=1 00:20:57.654 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=96.8%, >=64=0.0% 00:20:57.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.654 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:20:57.654 issued rwts: total=499,461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.654 latency : target=0, window=0, percentile=100.00%, depth=32 00:20:57.654 job11: (groupid=0, jobs=1): err= 0: pid=78934: Mon Jul 22 17:25:16 2024 00:20:57.654 read: IOPS=455, BW=56.9MiB/s (59.6MB/s)(58.9MiB/1035msec) 00:20:57.654 slat (usec): min=8, max=456, avg=26.26, stdev=43.24 00:20:57.654 clat (usec): min=1940, max=38791, avg=9096.84, stdev=2529.98 00:20:57.654 lat (usec): min=1950, max=38808, avg=9123.11, stdev=2528.70 00:20:57.654 clat percentiles (usec): 00:20:57.654 | 1.00th=[ 2606], 5.00th=[ 7504], 10.00th=[ 7767], 20.00th=[ 8160], 00:20:57.654 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8848], 00:20:57.654 | 
70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[10552], 95.00th=[13042], 00:20:57.654 | 99.00th=[16909], 99.50th=[17433], 99.90th=[38536], 99.95th=[38536], 00:20:57.654 | 99.99th=[38536] 00:20:57.654 bw ( KiB/s): min=54016, max=66048, per=6.58%, avg=60032.00, stdev=8507.91, samples=2 00:20:57.654 iops : min= 422, max= 516, avg=469.00, stdev=66.47, samples=2 00:20:57.654 write: IOPS=470, BW=58.8MiB/s (61.7MB/s)(60.9MiB/1035msec); 0 zone resets 00:20:57.654 slat (usec): min=9, max=654, avg=31.13, stdev=51.16 00:20:57.654 clat (usec): min=8735, max=84858, avg=59043.13, stdev=7860.60 00:20:57.654 lat (usec): min=8767, max=84891, avg=59074.26, stdev=7861.41 00:20:57.654 clat percentiles (usec): 00:20:57.654 | 1.00th=[23200], 5.00th=[49021], 10.00th=[52167], 20.00th=[55837], 00:20:57.654 | 30.00th=[57410], 40.00th=[58983], 50.00th=[60031], 60.00th=[61080], 00:20:57.654 | 70.00th=[62129], 80.00th=[63701], 90.00th=[65799], 95.00th=[67634], 00:20:57.654 | 99.00th=[77071], 99.50th=[80217], 99.90th=[84411], 99.95th=[84411], 00:20:57.654 | 99.99th=[84411] 00:20:57.654 bw ( KiB/s): min=55552, max=61440, per=6.26%, avg=58496.00, stdev=4163.44, samples=2 00:20:57.654 iops : min= 434, max= 480, avg=457.00, stdev=32.53, samples=2 00:20:57.654 lat (msec) : 2=0.21%, 4=0.63%, 10=40.61%, 20=7.93%, 50=2.82% 00:20:57.654 lat (msec) : 100=47.81% 00:20:57.654 cpu : usr=0.77%, sys=1.84%, ctx=852, majf=0, minf=1 00:20:57.654 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=96.8%, >=64=0.0% 00:20:57.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.654 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:20:57.654 issued rwts: total=471,487,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.654 latency : target=0, window=0, percentile=100.00%, depth=32 00:20:57.654 job12: (groupid=0, jobs=1): err= 0: pid=78935: Mon Jul 22 17:25:16 2024 00:20:57.654 read: IOPS=503, BW=62.9MiB/s (66.0MB/s)(65.6MiB/1043msec) 00:20:57.654 slat (usec): min=7, 
max=749, avg=21.08, stdev=44.72 00:20:57.654 clat (usec): min=1583, max=49608, avg=9090.78, stdev=3262.78 00:20:57.654 lat (usec): min=1593, max=49621, avg=9111.86, stdev=3261.33 00:20:57.654 clat percentiles (usec): 00:20:57.654 | 1.00th=[ 3032], 5.00th=[ 7635], 10.00th=[ 8029], 20.00th=[ 8225], 00:20:57.654 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8848], 00:20:57.654 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9896], 95.00th=[11469], 00:20:57.654 | 99.00th=[15270], 99.50th=[43779], 99.90th=[49546], 99.95th=[49546], 00:20:57.654 | 99.99th=[49546] 00:20:57.654 bw ( KiB/s): min=63361, max=70003, per=7.31%, avg=66682.00, stdev=4696.60, samples=2 00:20:57.654 iops : min= 495, max= 546, avg=520.50, stdev=36.06, samples=2 00:20:57.654 write: IOPS=463, BW=57.9MiB/s (60.7MB/s)(60.4MiB/1043msec); 0 zone resets 00:20:57.654 slat (usec): min=8, max=595, avg=28.10, stdev=49.99 00:20:57.654 clat (usec): min=9268, max=99601, avg=59039.70, stdev=8011.23 00:20:57.654 lat (usec): min=9305, max=99614, avg=59067.80, stdev=8011.65 00:20:57.654 clat percentiles (usec): 00:20:57.654 | 1.00th=[22152], 5.00th=[52691], 10.00th=[54789], 20.00th=[56361], 00:20:57.654 | 30.00th=[56886], 40.00th=[58459], 50.00th=[58983], 60.00th=[60031], 00:20:57.655 | 70.00th=[60556], 80.00th=[62129], 90.00th=[64226], 95.00th=[66847], 00:20:57.655 | 99.00th=[90702], 99.50th=[92799], 99.90th=[99091], 99.95th=[99091], 00:20:57.655 | 99.99th=[99091] 00:20:57.655 bw ( KiB/s): min=57485, max=58762, per=6.22%, avg=58123.50, stdev=902.98, samples=2 00:20:57.655 iops : min= 449, max= 459, avg=454.00, stdev= 7.07, samples=2 00:20:57.655 lat (msec) : 2=0.30%, 4=0.60%, 10=46.83%, 20=4.46%, 50=1.79% 00:20:57.655 lat (msec) : 100=46.03% 00:20:57.655 cpu : usr=0.58%, sys=1.73%, ctx=949, majf=0, minf=1 00:20:57.655 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=96.9%, >=64=0.0% 00:20:57.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.655 complete : 
0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:20:57.655 issued rwts: total=525,483,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.655 latency : target=0, window=0, percentile=100.00%, depth=32 00:20:57.655 job13: (groupid=0, jobs=1): err= 0: pid=78936: Mon Jul 22 17:25:16 2024 00:20:57.655 read: IOPS=476, BW=59.6MiB/s (62.5MB/s)(61.8MiB/1036msec) 00:20:57.655 slat (usec): min=6, max=1068, avg=19.61, stdev=56.10 00:20:57.655 clat (usec): min=1591, max=42313, avg=9226.59, stdev=3565.18 00:20:57.655 lat (usec): min=1617, max=42323, avg=9246.20, stdev=3564.25 00:20:57.655 clat percentiles (usec): 00:20:57.655 | 1.00th=[ 4228], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8356], 00:20:57.655 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8717], 60.00th=[ 8848], 00:20:57.655 | 70.00th=[ 9110], 80.00th=[ 9241], 90.00th=[ 9765], 95.00th=[10683], 00:20:57.655 | 99.00th=[38536], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:57.655 | 99.99th=[42206] 00:20:57.655 bw ( KiB/s): min=59528, max=65405, per=6.85%, avg=62466.50, stdev=4155.67, samples=2 00:20:57.655 iops : min= 465, max= 510, avg=487.50, stdev=31.82, samples=2 00:20:57.655 write: IOPS=463, BW=57.9MiB/s (60.7MB/s)(60.0MiB/1036msec); 0 zone resets 00:20:57.655 slat (usec): min=7, max=375, avg=23.14, stdev=35.13 00:20:57.655 clat (usec): min=9305, max=91026, avg=59413.05, stdev=7679.96 00:20:57.655 lat (usec): min=9318, max=91040, avg=59436.19, stdev=7680.57 00:20:57.655 clat percentiles (usec): 00:20:57.655 | 1.00th=[23725], 5.00th=[48497], 10.00th=[53740], 20.00th=[56361], 00:20:57.655 | 30.00th=[57934], 40.00th=[58983], 50.00th=[60556], 60.00th=[61080], 00:20:57.655 | 70.00th=[62129], 80.00th=[63177], 90.00th=[65274], 95.00th=[67634], 00:20:57.655 | 99.00th=[82314], 99.50th=[87557], 99.90th=[90702], 99.95th=[90702], 00:20:57.655 | 99.99th=[90702] 00:20:57.655 bw ( KiB/s): min=56207, max=59784, per=6.20%, avg=57995.50, stdev=2529.32, samples=2 00:20:57.655 iops : min= 439, max= 467, avg=453.00, 
stdev=19.80, samples=2 00:20:57.655 lat (msec) : 2=0.10%, 4=0.31%, 10=46.30%, 20=3.90%, 50=2.67% 00:20:57.655 lat (msec) : 100=46.71% 00:20:57.655 cpu : usr=0.58%, sys=1.35%, ctx=948, majf=0, minf=1 00:20:57.655 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=96.8%, >=64=0.0% 00:20:57.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.655 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:20:57.655 issued rwts: total=494,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.655 latency : target=0, window=0, percentile=100.00%, depth=32 00:20:57.655 job14: (groupid=0, jobs=1): err= 0: pid=78937: Mon Jul 22 17:25:16 2024 00:20:57.655 read: IOPS=428, BW=53.6MiB/s (56.2MB/s)(55.5MiB/1035msec) 00:20:57.655 slat (usec): min=6, max=582, avg=20.13, stdev=41.60 00:20:57.655 clat (usec): min=1986, max=41034, avg=9369.04, stdev=2550.95 00:20:57.655 lat (usec): min=2011, max=41043, avg=9389.17, stdev=2549.53 00:20:57.655 clat percentiles (usec): 00:20:57.655 | 1.00th=[ 4883], 5.00th=[ 8225], 10.00th=[ 8455], 20.00th=[ 8717], 00:20:57.655 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:20:57.655 | 70.00th=[ 9503], 80.00th=[ 9634], 90.00th=[10028], 95.00th=[10421], 00:20:57.655 | 99.00th=[14746], 99.50th=[34341], 99.90th=[41157], 99.95th=[41157], 00:20:57.655 | 99.99th=[41157] 00:20:57.655 bw ( KiB/s): min=54674, max=58112, per=6.18%, avg=56393.00, stdev=2431.03, samples=2 00:20:57.655 iops : min= 427, max= 454, avg=440.50, stdev=19.09, samples=2 00:20:57.655 write: IOPS=446, BW=55.8MiB/s (58.5MB/s)(57.8MiB/1035msec); 0 zone resets 00:20:57.655 slat (usec): min=7, max=471, avg=23.86, stdev=42.37 00:20:57.655 clat (usec): min=10915, max=98760, avg=62519.45, stdev=9008.92 00:20:57.655 lat (usec): min=10926, max=98775, avg=62543.31, stdev=9014.34 00:20:57.655 clat percentiles (usec): 00:20:57.655 | 1.00th=[21627], 5.00th=[48497], 10.00th=[57410], 20.00th=[59507], 00:20:57.655 | 30.00th=[61080], 
40.00th=[62129], 50.00th=[63177], 60.00th=[64226], 00:20:57.655 | 70.00th=[65274], 80.00th=[66847], 90.00th=[69731], 95.00th=[70779], 00:20:57.655 | 99.00th=[87557], 99.50th=[91751], 99.90th=[99091], 99.95th=[99091], 00:20:57.655 | 99.99th=[99091] 00:20:57.655 bw ( KiB/s): min=54528, max=56463, per=5.94%, avg=55495.50, stdev=1368.25, samples=2 00:20:57.655 iops : min= 426, max= 441, avg=433.50, stdev=10.61, samples=2 00:20:57.655 lat (msec) : 2=0.11%, 4=0.22%, 10=43.71%, 20=5.08%, 50=2.65% 00:20:57.655 lat (msec) : 100=48.23% 00:20:57.655 cpu : usr=0.39%, sys=1.45%, ctx=857, majf=0, minf=1 00:20:57.655 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=96.6%, >=64=0.0% 00:20:57.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.655 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:20:57.655 issued rwts: total=444,462,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.655 latency : target=0, window=0, percentile=100.00%, depth=32 00:20:57.655 job15: (groupid=0, jobs=1): err= 0: pid=78938: Mon Jul 22 17:25:16 2024 00:20:57.655 read: IOPS=437, BW=54.7MiB/s (57.4MB/s)(57.5MiB/1051msec) 00:20:57.655 slat (usec): min=7, max=466, avg=22.43, stdev=31.99 00:20:57.655 clat (usec): min=879, max=17660, avg=8756.56, stdev=1656.80 00:20:57.655 lat (usec): min=896, max=17670, avg=8779.00, stdev=1655.83 00:20:57.655 clat percentiles (usec): 00:20:57.655 | 1.00th=[ 4490], 5.00th=[ 6652], 10.00th=[ 7635], 20.00th=[ 8029], 00:20:57.655 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8848], 00:20:57.655 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[10159], 95.00th=[10683], 00:20:57.655 | 99.00th=[15664], 99.50th=[17433], 99.90th=[17695], 99.95th=[17695], 00:20:57.655 | 99.99th=[17695] 00:20:57.655 bw ( KiB/s): min=55696, max=61952, per=6.45%, avg=58824.00, stdev=4423.66, samples=2 00:20:57.655 iops : min= 435, max= 484, avg=459.50, stdev=34.65, samples=2 00:20:57.655 write: IOPS=468, BW=58.5MiB/s 
(61.4MB/s)(61.5MiB/1051msec); 0 zone resets 00:20:57.655 slat (usec): min=8, max=259, avg=26.21, stdev=26.53 00:20:57.655 clat (usec): min=1489, max=98497, avg=59976.82, stdev=11483.67 00:20:57.655 lat (usec): min=1573, max=98548, avg=60003.03, stdev=11483.42 00:20:57.655 clat percentiles (usec): 00:20:57.655 | 1.00th=[ 3785], 5.00th=[46924], 10.00th=[53740], 20.00th=[56886], 00:20:57.655 | 30.00th=[58459], 40.00th=[59507], 50.00th=[60556], 60.00th=[61604], 00:20:57.655 | 70.00th=[62653], 80.00th=[64226], 90.00th=[67634], 95.00th=[74974], 00:20:57.655 | 99.00th=[91751], 99.50th=[96994], 99.90th=[98042], 99.95th=[98042], 00:20:57.655 | 99.99th=[98042] 00:20:57.655 bw ( KiB/s): min=56832, max=61061, per=6.31%, avg=58946.50, stdev=2990.35, samples=2 00:20:57.655 iops : min= 444, max= 477, avg=460.50, stdev=23.33, samples=2 00:20:57.655 lat (usec) : 1000=0.21% 00:20:57.655 lat (msec) : 2=0.11%, 4=0.42%, 10=43.49%, 20=5.36%, 50=1.79% 00:20:57.655 lat (msec) : 100=48.63% 00:20:57.655 cpu : usr=0.48%, sys=2.00%, ctx=833, majf=0, minf=1 00:20:57.655 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=96.7%, >=64=0.0% 00:20:57.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.655 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:20:57.655 issued rwts: total=460,492,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.655 latency : target=0, window=0, percentile=100.00%, depth=32 00:20:57.655 00:20:57.655 Run status group 0 (all jobs): 00:20:57.655 READ: bw=891MiB/s (934MB/s), 51.4MiB/s-62.9MiB/s (53.9MB/s-66.0MB/s), io=936MiB (982MB), run=1031-1051msec 00:20:57.655 WRITE: bw=913MiB/s (957MB/s), 55.0MiB/s-58.8MiB/s (57.7MB/s-61.7MB/s), io=959MiB (1006MB), run=1031-1051msec 00:20:57.655 00:20:57.655 Disk stats (read/write): 00:20:57.655 sda: ios=467/421, merge=0/0, ticks=3639/24929, in_queue=28568, util=76.58% 00:20:57.655 sdb: ios=450/435, merge=0/0, ticks=3639/25550, in_queue=29189, util=78.77% 00:20:57.655 sdc: 
ios=489/401, merge=0/0, ticks=4061/24616, in_queue=28678, util=78.76% 00:20:57.655 sde: ios=468/419, merge=0/0, ticks=3784/24936, in_queue=28721, util=79.47% 00:20:57.655 sdd: ios=478/420, merge=0/0, ticks=3764/24727, in_queue=28492, util=79.76% 00:20:57.655 sdf: ios=507/426, merge=0/0, ticks=4097/24924, in_queue=29021, util=81.38% 00:20:57.655 sdg: ios=430/414, merge=0/0, ticks=3967/25122, in_queue=29090, util=80.79% 00:20:57.655 sdh: ios=401/420, merge=0/0, ticks=3524/25162, in_queue=28687, util=82.61% 00:20:57.655 sdi: ios=405/432, merge=0/0, ticks=3382/25502, in_queue=28885, util=83.56% 00:20:57.655 sdj: ios=474/417, merge=0/0, ticks=4155/24261, in_queue=28417, util=83.30% 00:20:57.655 sdk: ios=456/407, merge=0/0, ticks=4154/24858, in_queue=29012, util=85.49% 00:20:57.655 sdl: ios=436/417, merge=0/0, ticks=3882/24640, in_queue=28523, util=85.03% 00:20:57.655 sdm: ios=492/421, merge=0/0, ticks=4312/24378, in_queue=28691, util=85.91% 00:20:57.655 sdn: ios=452/419, merge=0/0, ticks=4004/24727, in_queue=28731, util=86.80% 00:20:57.655 sdo: ios=406/402, merge=0/0, ticks=3705/24951, in_queue=28657, util=86.79% 00:20:57.655 sdp: ios=434/432, merge=0/0, ticks=3771/25454, in_queue=29226, util=90.07% 00:20:57.655 [2024-07-22 17:25:16.433366] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.655 Cleaning up iSCSI connection 00:20:57.655 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@82 -- # iscsicleanup 00:20:57.655 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:20:57.655 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:20:58.222 Logging out of session [sid: 22, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] 00:20:58.222 Logging out of session [sid: 23, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:20:58.222 Logging out of session [sid: 24, target: 
iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:20:58.222 Logging out of session [sid: 25, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:20:58.222 Logging out of session [sid: 26, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:20:58.222 Logging out of session [sid: 27, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:20:58.222 Logging out of session [sid: 28, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:20:58.222 Logging out of session [sid: 29, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:20:58.222 Logging out of session [sid: 30, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:20:58.222 Logging out of session [sid: 31, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:20:58.222 Logging out of session [sid: 32, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:20:58.222 Logging out of session [sid: 33, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:20:58.222 Logging out of session [sid: 34, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:20:58.222 Logging out of session [sid: 35, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:20:58.222 Logging out of session [sid: 36, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:20:58.222 Logging out of session [sid: 37, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:20:58.222 Logout of [sid: 22, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:20:58.222 Logout of [sid: 23, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:20:58.222 Logout of [sid: 24, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:20:58.222 Logout of [sid: 25, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:20:58.222 Logout of [sid: 26, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 
00:20:58.222 Logout of [sid: 27, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:20:58.222 Logout of [sid: 28, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:20:58.222 Logout of [sid: 29, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:20:58.222 Logout of [sid: 30, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:20:58.222 Logout of [sid: 31, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:20:58.222 Logout of [sid: 32, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:20:58.222 Logout of [sid: 33, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:20:58.222 Logout of [sid: 34, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:20:58.222 Logout of [sid: 35, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:20:58.222 Logout of [sid: 36, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:20:58.222 Logout of [sid: 37, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 
00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@983 -- # rm -rf 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@84 -- # RPCS= 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # seq 0 15 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target0\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc0\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target1\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc1\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target2\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc2\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target3\n' 00:20:58.222 
17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc3\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target4\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc4\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target5\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc5\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target6\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc6\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target7\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc7\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target8\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc8\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target9\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc9\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target10\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc10\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target11\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc11\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target12\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc12\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target13\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc13\n' 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:58.222 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target14\n' 00:20:58.223 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc14\n' 00:20:58.223 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:20:58.223 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target15\n' 00:20:58.223 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc15\n' 00:20:58.223 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@90 -- # echo -e iscsi_delete_target_node 'iqn.2016-06.io.spdk:Target0\nbdev_malloc_delete' 'Malloc0\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target1\nbdev_malloc_delete' 'Malloc1\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target2\nbdev_malloc_delete' 'Malloc2\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target3\nbdev_malloc_delete' 'Malloc3\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target4\nbdev_malloc_delete' 'Malloc4\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target5\nbdev_malloc_delete' 'Malloc5\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target6\nbdev_malloc_delete' 'Malloc6\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target7\nbdev_malloc_delete' 
'Malloc7\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target8\nbdev_malloc_delete' 'Malloc8\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target9\nbdev_malloc_delete' 'Malloc9\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target10\nbdev_malloc_delete' 'Malloc10\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target11\nbdev_malloc_delete' 'Malloc11\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target12\nbdev_malloc_delete' 'Malloc12\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target13\nbdev_malloc_delete' 'Malloc13\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target14\nbdev_malloc_delete' 'Malloc14\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target15\nbdev_malloc_delete' 'Malloc15\n' 00:20:58.223 17:25:16 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:01.507 17:25:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@92 -- # trap 'delete_tmp_files; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:21:01.507 17:25:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@94 -- # killprocess 78406 00:21:01.507 17:25:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@948 -- # '[' -z 78406 ']' 00:21:01.507 17:25:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@952 -- # kill -0 78406 00:21:01.507 17:25:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # uname 00:21:01.507 17:25:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:01.507 17:25:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78406 00:21:01.507 17:25:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:01.507 17:25:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:01.507 killing process with pid 78406 00:21:01.507 17:25:20 
iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78406' 00:21:01.507 17:25:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@967 -- # kill 78406 00:21:01.507 17:25:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@972 -- # wait 78406 00:21:04.038 17:25:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@95 -- # killprocess 78441 00:21:04.038 17:25:22 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@948 -- # '[' -z 78441 ']' 00:21:04.038 17:25:22 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@952 -- # kill -0 78441 00:21:04.038 17:25:22 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # uname 00:21:04.038 17:25:22 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:04.038 17:25:22 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78441 00:21:04.038 17:25:22 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # process_name=spdk_trace_reco 00:21:04.038 17:25:22 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@958 -- # '[' spdk_trace_reco = sudo ']' 00:21:04.038 killing process with pid 78441 00:21:04.038 17:25:22 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78441' 00:21:04.038 17:25:22 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@967 -- # kill 78441 00:21:04.038 17:25:22 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@972 -- # wait 78441 00:21:04.039 17:25:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@96 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -f ./tmp-trace/record.trace 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # grep 'trace entries for lcore' ./tmp-trace/record.notice 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@100 -- # cut -d ' ' -f 2 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # record_num='131906 00:21:22.113 133680 00:21:22.113 135386 00:21:22.113 132775' 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # grep 'Trace Size of lcore' ./tmp-trace/trace.log 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # cut -d ' ' -f 6 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # trace_tool_num='131906 00:21:22.113 133680 00:21:22.113 135386 00:21:22.113 132775' 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@105 -- # delete_tmp_files 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@19 -- # rm -rf ./tmp-trace 00:21:22.113 entries numbers from trace record are: 131906 133680 135386 132775 00:21:22.113 entries numbers from trace tool are: 131906 133680 135386 132775 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@107 -- # echo 'entries numbers from trace record are:' 131906 133680 135386 132775 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@108 -- # echo 'entries numbers from trace tool are:' 131906 133680 135386 132775 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@110 -- # arr_record_num=($record_num) 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@111 -- # arr_trace_tool_num=($trace_tool_num) 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@112 -- # len_arr_record_num=4 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@113 -- # len_arr_trace_tool_num=4 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@116 -- # '[' 4 -ne 4 ']' 
00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # seq 0 3 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 131906 -le 4096 ']' 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 131906 -ne 131906 ']' 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 133680 -le 4096 ']' 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 133680 -ne 133680 ']' 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 135386 -le 4096 ']' 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 135386 -ne 135386 ']' 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 132775 -le 4096 ']' 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 132775 -ne 132775 ']' 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@135 -- # trap - SIGINT SIGTERM EXIT 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@136 -- # iscsitestfini 00:21:22.113 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:21:22.113 
00:21:22.114 real 0m28.944s 00:21:22.114 user 1m9.928s 00:21:22.114 sys 0m4.155s 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:22.114 ************************************ 00:21:22.114 END TEST iscsi_tgt_trace_record 00:21:22.114 ************************************ 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:21:22.114 17:25:38 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:21:22.114 17:25:38 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@41 -- # run_test iscsi_tgt_login_redirection /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection/login_redirection.sh 00:21:22.114 17:25:38 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:22.114 17:25:38 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:22.114 17:25:38 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:21:22.114 ************************************ 00:21:22.114 START TEST iscsi_tgt_login_redirection 00:21:22.114 ************************************ 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection/login_redirection.sh 00:21:22.114 * Looking for test storage... 
00:21:22.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@25 -- # 
INITIATOR_TAG=2 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@12 -- # iscsitestinit 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@14 -- # NULL_BDEV_SIZE=64 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@15 -- # NULL_BLOCK_SIZE=512 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@17 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@18 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@20 -- # rpc_addr1=/var/tmp/spdk0.sock 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@21 -- # rpc_addr2=/var/tmp/spdk1.sock 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@25 -- # timing_enter start_iscsi_tgts 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:21:22.114 17:25:38 
iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@28 -- # pid1=79368 00:21:22.114 Process pid: 79368 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@29 -- # echo 'Process pid: 79368' 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@27 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk0.sock -i 0 -m 0x1 --wait-for-rpc 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@32 -- # pid2=79369 00:21:22.114 Process pid: 79369 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@33 -- # echo 'Process pid: 79369' 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@35 -- # trap 'killprocess $pid1; killprocess $pid2; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@37 -- # waitforlisten 79368 /var/tmp/spdk0.sock 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@31 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk1.sock -i 1 -m 0x2 --wait-for-rpc 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@829 -- # '[' -z 79368 ']' 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk0.sock 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock...' 
00:21:22.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock... 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.114 17:25:38 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:21:22.114 [2024-07-22 17:25:38.893955] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:22.114 [2024-07-22 17:25:38.894147] [ DPDK EAL parameters: iscsi -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.114 [2024-07-22 17:25:38.894698] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:22.114 [2024-07-22 17:25:38.894879] [ DPDK EAL parameters: iscsi -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:22.114 [2024-07-22 17:25:39.073722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.114 [2024-07-22 17:25:39.076532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.114 [2024-07-22 17:25:39.324263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.114 [2024-07-22 17:25:39.373434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.114 17:25:39 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:22.114 17:25:39 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@862 -- # return 0 00:21:22.114 17:25:39 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_set_options -w 0 -o 30 -a 16 
00:21:22.114 17:25:40 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock framework_start_init 00:21:22.373 iscsi_tgt_1 is listening. 00:21:22.373 17:25:41 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@40 -- # echo 'iscsi_tgt_1 is listening.' 00:21:22.373 17:25:41 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@42 -- # waitforlisten 79369 /var/tmp/spdk1.sock 00:21:22.373 17:25:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@829 -- # '[' -z 79369 ']' 00:21:22.373 17:25:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk1.sock 00:21:22.373 17:25:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock... 00:21:22.373 17:25:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock...' 
00:21:22.373 17:25:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.373 17:25:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:21:22.631 17:25:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:22.631 17:25:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@862 -- # return 0 00:21:22.631 17:25:41 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_set_options -w 0 -o 30 -a 16 00:21:22.889 17:25:41 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock framework_start_init 00:21:23.825 iscsi_tgt_2 is listening. 00:21:23.825 17:25:42 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@45 -- # echo 'iscsi_tgt_2 is listening.' 
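[editor's note] The `waitforlisten` and `waitforiscsidevices` helpers traced above share one bounded-poll pattern: retry a check up to a fixed number of times and succeed on the first match. The sketch below is a hypothetical, standalone reconstruction of that pattern — the check command is mocked with a state file (the real helpers probe an RPC socket or grep `iscsiadm -m session` output), and the names `poll_until` and `mock_check` are illustrative, not SPDK's.

```shell
#!/usr/bin/env bash
# Bounded-poll sketch: retry a check command up to $max times, return 0 the
# first time it reports the wanted count. Mirrors the loop structure of
# waitforiscsidevices in iscsi_tgt/common.sh (iterations 1..20).
poll_until() {
    local want=$1 check=$2 max=${3:-20} i n
    for ((i = 1; i <= max; i++)); do
        n=$("$check")
        if [[ "$n" -eq "$want" ]]; then
            echo "ready after $i poll(s)"
            return 0
        fi
        sleep 0.05
    done
    echo "timed out after $max poll(s)" >&2
    return 1
}

# Mock check (assumption, for a runnable sketch): reports 0 on the first
# two calls, then 1 — simulating a disk that attaches on the third poll.
state=$(mktemp)
echo 0 >"$state"
mock_check() {
    local c
    c=$(<"$state")
    echo $((c + 1)) >"$state"
    if (( c >= 2 )); then echo 1; else echo 0; fi
}

result=$(poll_until 1 mock_check)
echo "$result"
rm -f "$state"
```

In the real test the check would be `iscsiadm -m session -P 3 | grep -c 'Attached scsi disk sd[a-z]*'`, as the trace above shows.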
00:21:23.825 17:25:42 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@47 -- # timing_exit start_iscsi_tgts 00:21:23.825 17:25:42 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:23.825 17:25:42 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:21:24.084 17:25:42 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:21:24.342 17:25:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_portal_group 1 10.0.0.1:3260 00:21:24.602 17:25:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock bdev_null_create Null0 64 512 00:21:24.861 Null0 00:21:24.861 17:25:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_target_node Target1 Target1_alias Null0:0 1:2 64 -d 00:21:25.118 17:25:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:21:25.376 17:25:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_portal_group 1 10.0.0.3:3260 -p 00:21:25.634 17:25:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock bdev_null_create Null0 64 512 00:21:25.634 Null0 00:21:25.892 17:25:44 iscsi_tgt.iscsi_tgt_login_redirection 
-- login_redirection/login_redirection.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_target_node Target1 Target1_alias Null0:0 1:2 64 -d 00:21:26.149 17:25:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@67 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:21:26.149 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:21:26.149 17:25:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@68 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:21:26.149 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:21:26.149 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:21:26.149 17:25:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@69 -- # waitforiscsidevices 1 00:21:26.149 17:25:44 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@116 -- # local num=1 00:21:26.149 17:25:44 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:21:26.149 17:25:44 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:21:26.149 17:25:44 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:21:26.149 17:25:44 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:21:26.149 [2024-07-22 17:25:44.910732] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:26.149 17:25:44 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # n=1 00:21:26.149 17:25:44 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:21:26.149 17:25:44 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@123 -- # return 0 00:21:26.149 17:25:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@72 -- # 
fiopid=79484 00:21:26.149 FIO pid: 79484 00:21:26.149 17:25:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@73 -- # echo 'FIO pid: 79484' 00:21:26.149 17:25:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@75 -- # trap 'iscsicleanup; killprocess $pid1; killprocess $pid2; killprocess $fiopid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:21:26.149 17:25:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t randrw -r 15 00:21:26.149 17:25:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # jq length 00:21:26.149 17:25:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:21:26.149 [global] 00:21:26.149 thread=1 00:21:26.149 invalidate=1 00:21:26.149 rw=randrw 00:21:26.149 time_based=1 00:21:26.149 runtime=15 00:21:26.149 ioengine=libaio 00:21:26.149 direct=1 00:21:26.149 bs=512 00:21:26.149 iodepth=1 00:21:26.149 norandommap=1 00:21:26.149 numjobs=1 00:21:26.149 00:21:26.149 [job0] 00:21:26.149 filename=/dev/sda 00:21:26.149 queue_depth set to 113 (sda) 00:21:26.149 job0: (g=0): rw=randrw, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:21:26.149 fio-3.35 00:21:26.149 Starting 1 thread 00:21:26.149 [2024-07-22 17:25:45.085325] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:26.406 17:25:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # '[' 1 = 1 ']' 00:21:26.406 17:25:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:21:26.406 17:25:45 iscsi_tgt.iscsi_tgt_login_redirection -- 
login_redirection/login_redirection.sh@78 -- # jq length 00:21:26.664 17:25:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # '[' 0 = 0 ']' 00:21:26.664 17:25:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect iqn.2016-06.io.spdk:Target1 1 -a 10.0.0.3 -p 3260 00:21:26.922 17:25:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_request_logout iqn.2016-06.io.spdk:Target1 -t 1 00:21:27.179 17:25:46 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@85 -- # sleep 5 00:21:32.447 17:25:51 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:21:32.447 17:25:51 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # jq length 00:21:32.447 17:25:51 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # '[' 0 = 0 ']' 00:21:32.447 17:25:51 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # jq length 00:21:32.447 17:25:51 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:21:32.705 17:25:51 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # '[' 1 = 1 ']' 00:21:32.705 17:25:51 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect iqn.2016-06.io.spdk:Target1 1 00:21:32.963 17:25:51 iscsi_tgt.iscsi_tgt_login_redirection -- 
login_redirection/login_redirection.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_target_node_request_logout iqn.2016-06.io.spdk:Target1 -t 1 00:21:33.280 17:25:52 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@93 -- # sleep 5 00:21:38.553 17:25:57 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:21:38.553 17:25:57 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # jq length 00:21:38.553 17:25:57 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # '[' 1 = 1 ']' 00:21:38.553 17:25:57 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # jq length 00:21:38.553 17:25:57 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:21:38.811 17:25:57 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # '[' 0 = 0 ']' 00:21:38.811 17:25:57 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@98 -- # wait 79484 00:21:41.343 [2024-07-22 17:26:00.193618] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.343 00:21:41.343 job0: (groupid=0, jobs=1): err= 0: pid=79513: Mon Jul 22 17:26:00 2024 00:21:41.343 read: IOPS=3572, BW=1786KiB/s (1829kB/s)(26.2MiB/15001msec) 00:21:41.343 slat (nsec): min=4386, max=48435, avg=6659.25, stdev=1823.43 00:21:41.343 clat (usec): min=77, max=2009.5k, avg=169.35, stdev=12265.14 00:21:41.343 lat (usec): min=84, max=2009.5k, avg=176.01, stdev=12265.26 00:21:41.343 clat percentiles (usec): 00:21:41.343 | 1.00th=[ 84], 5.00th=[ 86], 10.00th=[ 86], 20.00th=[ 87], 00:21:41.343 | 30.00th=[ 88], 40.00th=[ 89], 50.00th=[ 91], 60.00th=[ 
93], 00:21:41.343 | 70.00th=[ 96], 80.00th=[ 101], 90.00th=[ 108], 95.00th=[ 115], 00:21:41.343 | 99.00th=[ 133], 99.50th=[ 143], 99.90th=[ 192], 99.95th=[ 258], 00:21:41.343 | 99.99th=[ 873] 00:21:41.343 bw ( KiB/s): min= 532, max= 2547, per=100.00%, avg=2225.87, stdev=569.52, samples=23 00:21:41.343 iops : min= 1064, max= 5094, avg=4451.74, stdev=1139.05, samples=23 00:21:41.343 write: IOPS=3558, BW=1779KiB/s (1822kB/s)(26.1MiB/15001msec); 0 zone resets 00:21:41.343 slat (nsec): min=4278, max=68806, avg=6529.92, stdev=1883.01 00:21:41.343 clat (usec): min=75, max=759, avg=96.06, stdev=13.32 00:21:41.343 lat (usec): min=85, max=764, avg=102.59, stdev=13.73 00:21:41.343 clat percentiles (usec): 00:21:41.343 | 1.00th=[ 85], 5.00th=[ 87], 10.00th=[ 88], 20.00th=[ 89], 00:21:41.343 | 30.00th=[ 90], 40.00th=[ 91], 50.00th=[ 92], 60.00th=[ 94], 00:21:41.343 | 70.00th=[ 98], 80.00th=[ 103], 90.00th=[ 110], 95.00th=[ 118], 00:21:41.343 | 99.00th=[ 135], 99.50th=[ 145], 99.90th=[ 190], 99.95th=[ 233], 00:21:41.343 | 99.99th=[ 619] 00:21:41.343 bw ( KiB/s): min= 542, max= 2570, per=100.00%, avg=2217.65, stdev=567.69, samples=23 00:21:41.343 iops : min= 1084, max= 5140, avg=4435.30, stdev=1135.39, samples=23 00:21:41.343 lat (usec) : 100=76.45%, 250=23.51%, 500=0.02%, 750=0.01%, 1000=0.01% 00:21:41.343 lat (msec) : 4=0.01%, 10=0.01%, >=2000=0.01% 00:21:41.343 cpu : usr=2.03%, sys=5.97%, ctx=106980, majf=0, minf=1 00:21:41.343 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:41.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.343 issued rwts: total=53593,53383,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.343 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:41.343 00:21:41.343 Run status group 0 (all jobs): 00:21:41.343 READ: bw=1786KiB/s (1829kB/s), 1786KiB/s-1786KiB/s (1829kB/s-1829kB/s), io=26.2MiB (27.4MB), 
run=15001-15001msec 00:21:41.343 WRITE: bw=1779KiB/s (1822kB/s), 1779KiB/s-1779KiB/s (1822kB/s-1822kB/s), io=26.1MiB (27.3MB), run=15001-15001msec 00:21:41.343 00:21:41.343 Disk stats (read/write): 00:21:41.343 sda: ios=53100/52857, merge=0/0, ticks=9016/5069, in_queue=14086, util=99.44% 00:21:41.343 Cleaning up iSCSI connection 00:21:41.343 17:26:00 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@100 -- # trap - SIGINT SIGTERM EXIT 00:21:41.343 17:26:00 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@102 -- # iscsicleanup 00:21:41.343 17:26:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:21:41.343 17:26:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:21:41.343 Logging out of session [sid: 38, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:21:41.343 Logout of [sid: 38, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
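[editor's note] The hand-off the trace exercises — re-point a portal group at the second target, then ask the initiator to log out so its automatic re-login lands on the redirect — can be sketched as a dry run. The `RPC` prefix below is deliberately `echo ...` so each call is printed rather than executed; the function name `redirect_and_kick` and the relative `scripts/rpc.py` path are illustrative assumptions, while the two RPC names and their flags match the log above.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the login-redirection hand-off. Drop the "echo" prefix
# (and point -s at a live iscsi_tgt RPC socket) to run it for real.
RPC="echo scripts/rpc.py"

redirect_and_kick() {
    local sock=$1 tgt=$2 pg_tag=$3 new_ip=$4 new_port=$5
    # Re-point portal group $pg_tag of $tgt at the secondary portal ...
    $RPC -s "$sock" iscsi_target_node_set_redirect "$tgt" "$pg_tag" -a "$new_ip" -p "$new_port"
    # ... then request a logout so the initiator's re-login follows the redirect.
    $RPC -s "$sock" iscsi_target_node_request_logout "$tgt" -t "$pg_tag"
}

redirect_and_kick /var/tmp/spdk0.sock iqn.2016-06.io.spdk:Target1 1 10.0.0.3 3260
```

The trace then confirms the move with `iscsi_get_connections` on each socket: the first target drops to 0 connections while the second rises to 1, and the fio job keeps running across the hand-off.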
00:21:41.343 17:26:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:21:41.602 17:26:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@983 -- # rm -rf 00:21:41.602 17:26:00 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@103 -- # killprocess 79368 00:21:41.602 17:26:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@948 -- # '[' -z 79368 ']' 00:21:41.602 17:26:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@952 -- # kill -0 79368 00:21:41.602 17:26:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # uname 00:21:41.602 17:26:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:41.602 17:26:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79368 00:21:41.602 killing process with pid 79368 00:21:41.602 17:26:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:41.602 17:26:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:41.602 17:26:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79368' 00:21:41.602 17:26:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@967 -- # kill 79368 00:21:41.602 17:26:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@972 -- # wait 79368 00:21:44.131 17:26:02 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@104 -- # killprocess 79369 00:21:44.131 17:26:02 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@948 -- # '[' -z 79369 ']' 00:21:44.131 17:26:02 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@952 -- # kill -0 79369 00:21:44.131 17:26:02 iscsi_tgt.iscsi_tgt_login_redirection -- 
common/autotest_common.sh@953 -- # uname 00:21:44.131 17:26:02 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:44.131 17:26:02 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79369 00:21:44.131 killing process with pid 79369 00:21:44.131 17:26:02 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:44.131 17:26:02 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:44.131 17:26:02 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79369' 00:21:44.131 17:26:02 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@967 -- # kill 79369 00:21:44.131 17:26:02 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@972 -- # wait 79369 00:21:46.029 17:26:04 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@105 -- # iscsitestfini 00:21:46.029 17:26:04 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:21:46.029 00:21:46.029 real 0m26.280s 00:21:46.029 user 0m50.414s 00:21:46.029 sys 0m5.820s 00:21:46.029 17:26:04 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:46.029 17:26:04 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:21:46.029 ************************************ 00:21:46.029 END TEST iscsi_tgt_login_redirection 00:21:46.029 ************************************ 00:21:46.029 17:26:04 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:21:46.029 17:26:04 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@42 -- # run_test iscsi_tgt_digests /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests/digests.sh 00:21:46.029 17:26:04 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:46.029 17:26:04 iscsi_tgt -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:21:46.029 17:26:04 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:21:46.287 ************************************ 00:21:46.287 START TEST iscsi_tgt_digests 00:21:46.287 ************************************ 00:21:46.287 17:26:04 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests/digests.sh 00:21:46.287 * Looking for test storage... 00:21:46.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests 00:21:46.287 17:26:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:21:46.287 17:26:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:21:46.287 17:26:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:21:46.287 17:26:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:21:46.287 17:26:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:21:46.287 17:26:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:21:46.287 17:26:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:21:46.287 17:26:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:21:46.287 17:26:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:21:46.287 17:26:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:21:46.287 17:26:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:21:46.287 17:26:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:21:46.287 17:26:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 
00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@11 -- # iscsitestinit 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@49 -- # MALLOC_BDEV_SIZE=64 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@52 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@54 -- # timing_enter start_iscsi_tgt 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@57 -- # pid=79814 00:21:46.288 Process pid: 79814 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@58 -- # echo 'Process pid: 79814' 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@56 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:21:46.288 17:26:05 
iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@60 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@62 -- # waitforlisten 79814 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@829 -- # '[' -z 79814 ']' 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:46.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.288 17:26:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:21:46.288 [2024-07-22 17:26:05.232278] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:21:46.288 [2024-07-22 17:26:05.232471] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79814 ] 00:21:46.545 [2024-07-22 17:26:05.410687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:46.803 [2024-07-22 17:26:05.700495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.803 [2024-07-22 17:26:05.700652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.803 [2024-07-22 17:26:05.700734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.803 [2024-07-22 17:26:05.700740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:47.368 17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:47.368 17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@862 -- # return 0 00:21:47.368 17:26:06 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@63 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:21:47.368 17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.368 17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:21:47.368 17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.368 17:26:06 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@64 -- # rpc_cmd framework_start_init 00:21:47.368 17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.368 17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:21:47.933 iscsi_tgt is listening. Running tests... 
00:21:47.933 17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.933 17:26:06 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@65 -- # echo 'iscsi_tgt is listening. Running tests...' 00:21:47.933 17:26:06 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@67 -- # timing_exit start_iscsi_tgt 00:21:47.933 17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:47.933 17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:21:47.933 17:26:06 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@69 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:21:47.933 17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.933 17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:21:47.933 17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.933 17:26:06 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@70 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:21:47.933 17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.933 17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:21:48.191 17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.191 17:26:06 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@71 -- # rpc_cmd bdev_malloc_create 64 512 00:21:48.191 17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.191 17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:21:48.191 Malloc0 00:21:48.191 17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.191 17:26:06 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@76 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d 00:21:48.191 
17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.191 17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:21:48.191 17:26:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.191 17:26:06 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@77 -- # sleep 1 00:21:49.124 17:26:07 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@79 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:21:49.124 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:21:49.124 17:26:07 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.DataDigest' -v None 00:21:49.124 17:26:07 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # true 00:21:49.124 17:26:07 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # DataDigestAbility='iscsiadm: Cannot modify node.conn[0].iscsi.DataDigest. Invalid param name. 00:21:49.124 iscsiadm: Could not execute operation on all records: invalid parameter' 00:21:49.124 17:26:08 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@84 -- # '[' 'iscsiadm: Cannot modify node.conn[0].iscsi.DataDigest. Invalid param name. 
00:21:49.124 iscsiadm: Could not execute operation on all records: invalid parameterx' '!=' x ']' 00:21:49.124 17:26:08 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@85 -- # run_test iscsi_tgt_digest iscsi_header_digest_test 00:21:49.124 17:26:08 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:49.124 17:26:08 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:49.124 17:26:08 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:21:49.124 ************************************ 00:21:49.124 START TEST iscsi_tgt_digest 00:21:49.124 ************************************ 00:21:49.124 17:26:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@1123 -- # iscsi_header_digest_test 00:21:49.124 17:26:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@27 -- # node_login_fio_logout 'HeaderDigest -v CRC32C' 00:21:49.124 17:26:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@14 -- # for arg in "$@" 00:21:49.124 17:26:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@15 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.HeaderDigest' -v CRC32C 00:21:49.124 17:26:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@17 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:21:49.124 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:21:49.124 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:21:49.124 17:26:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@18 -- # waitforiscsidevices 1 00:21:49.124 17:26:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=1 00:21:49.124 17:26:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:21:49.124 17:26:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:21:49.124 17:26:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:21:49.124 17:26:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:21:49.124 [2024-07-22 17:26:08.054636] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:49.124 17:26:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=1 00:21:49.124 17:26:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:21:49.124 17:26:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:21:49.124 17:26:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2 00:21:49.382 [global] 00:21:49.382 thread=1 00:21:49.382 invalidate=1 00:21:49.382 rw=write 00:21:49.382 time_based=1 00:21:49.382 runtime=2 00:21:49.382 ioengine=libaio 00:21:49.382 direct=1 00:21:49.382 bs=512 00:21:49.382 iodepth=1 00:21:49.382 norandommap=1 00:21:49.382 numjobs=1 00:21:49.382 00:21:49.382 [job0] 00:21:49.382 filename=/dev/sda 00:21:49.382 queue_depth set to 113 (sda) 00:21:49.382 job0: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:21:49.382 fio-3.35 00:21:49.382 Starting 1 thread 00:21:49.382 [2024-07-22 17:26:08.245584] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:21:51.961 [2024-07-22 17:26:10.363159] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:51.961 00:21:51.961 job0: (groupid=0, jobs=1): err= 0: pid=79917: Mon Jul 22 17:26:10 2024 00:21:51.961 write: IOPS=7172, BW=3586KiB/s (3672kB/s)(7176KiB/2001msec); 0 zone resets 00:21:51.961 slat (usec): min=5, max=157, avg= 6.96, stdev= 3.55 00:21:51.961 clat (usec): min=2, max=380, avg=130.97, stdev=17.28 00:21:51.961 lat (usec): min=114, max=386, avg=137.94, stdev=17.80 00:21:51.961 clat percentiles (usec): 00:21:51.961 | 1.00th=[ 114], 5.00th=[ 116], 10.00th=[ 117], 20.00th=[ 119], 00:21:51.961 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 130], 00:21:51.961 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 153], 95.00th=[ 167], 00:21:51.962 | 99.00th=[ 192], 99.50th=[ 202], 99.90th=[ 229], 99.95th=[ 247], 00:21:51.962 | 99.99th=[ 359] 00:21:51.962 bw ( KiB/s): min= 3687, max= 3776, per=100.00%, avg=3732.33, stdev=44.52, samples=3 00:21:51.962 iops : min= 7374, max= 7552, avg=7464.67, stdev=89.05, samples=3 00:21:51.962 lat (usec) : 4=0.01%, 50=0.03%, 100=0.06%, 250=99.87%, 500=0.04% 00:21:51.962 cpu : usr=3.15%, sys=6.50%, ctx=14381, majf=0, minf=1 00:21:51.962 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:51.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.962 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.962 issued rwts: total=0,14352,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.962 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:51.962 00:21:51.962 Run status group 0 (all jobs): 00:21:51.962 WRITE: bw=3586KiB/s (3672kB/s), 3586KiB/s-3586KiB/s (3672kB/s-3672kB/s), io=7176KiB (7348kB), run=2001-2001msec 00:21:51.962 00:21:51.962 Disk stats (read/write): 00:21:51.962 sda: ios=39/13600, merge=0/0, ticks=12/1729, in_queue=1741, util=95.51% 00:21:51.962 17:26:10 
iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 2 00:21:51.962 [global] 00:21:51.962 thread=1 00:21:51.962 invalidate=1 00:21:51.962 rw=read 00:21:51.962 time_based=1 00:21:51.962 runtime=2 00:21:51.962 ioengine=libaio 00:21:51.962 direct=1 00:21:51.962 bs=512 00:21:51.962 iodepth=1 00:21:51.962 norandommap=1 00:21:51.962 numjobs=1 00:21:51.962 00:21:51.962 [job0] 00:21:51.962 filename=/dev/sda 00:21:51.962 queue_depth set to 113 (sda) 00:21:51.962 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:21:51.962 fio-3.35 00:21:51.962 Starting 1 thread 00:21:53.864 00:21:53.865 job0: (groupid=0, jobs=1): err= 0: pid=79967: Mon Jul 22 17:26:12 2024 00:21:53.865 read: IOPS=8219, BW=4110KiB/s (4208kB/s)(8224KiB/2001msec) 00:21:53.865 slat (nsec): min=4163, max=80686, avg=6574.99, stdev=2056.15 00:21:53.865 clat (usec): min=90, max=2240, avg=114.36, stdev=34.75 00:21:53.865 lat (usec): min=96, max=2253, avg=120.93, stdev=35.13 00:21:53.865 clat percentiles (usec): 00:21:53.865 | 1.00th=[ 94], 5.00th=[ 96], 10.00th=[ 97], 20.00th=[ 99], 00:21:53.865 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 109], 60.00th=[ 114], 00:21:53.865 | 70.00th=[ 120], 80.00th=[ 129], 90.00th=[ 137], 95.00th=[ 147], 00:21:53.865 | 99.00th=[ 172], 99.50th=[ 194], 99.90th=[ 408], 99.95th=[ 482], 00:21:53.865 | 99.99th=[ 2245] 00:21:53.865 bw ( KiB/s): min= 3872, max= 4478, per=100.00%, avg=4207.67, stdev=308.24, samples=3 00:21:53.865 iops : min= 7744, max= 8956, avg=8415.33, stdev=616.47, samples=3 00:21:53.865 lat (usec) : 100=23.12%, 250=76.69%, 500=0.15%, 750=0.02% 00:21:53.865 lat (msec) : 2=0.01%, 4=0.01% 00:21:53.865 cpu : usr=2.40%, sys=7.30%, ctx=16447, majf=0, minf=1 00:21:53.865 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:53.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:21:53.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.865 issued rwts: total=16447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.865 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:53.865 00:21:53.865 Run status group 0 (all jobs): 00:21:53.865 READ: bw=4110KiB/s (4208kB/s), 4110KiB/s-4110KiB/s (4208kB/s-4208kB/s), io=8224KiB (8421kB), run=2001-2001msec 00:21:53.865 00:21:53.865 Disk stats (read/write): 00:21:53.865 sda: ios=15628/0, merge=0/0, ticks=1757/0, in_queue=1756, util=95.02% 00:21:53.865 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@21 -- # iscsiadm -m node --logout -p 10.0.0.1:3260 00:21:53.865 Logging out of session [sid: 39, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:21:53.865 Logout of [sid: 39, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:21:53.865 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@22 -- # waitforiscsidevices 0 00:21:53.865 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=0 00:21:53.865 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:21:53.865 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:21:53.865 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:21:53.865 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:21:53.865 iscsiadm: No active sessions. 
00:21:53.865 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # true 00:21:53.865 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=0 00:21:53.865 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:21:53.865 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:21:53.865 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@31 -- # node_login_fio_logout 'HeaderDigest -v CRC32C,None' 00:21:53.865 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@14 -- # for arg in "$@" 00:21:53.865 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@15 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.HeaderDigest' -v CRC32C,None 00:21:53.865 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@17 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:21:54.123 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:21:54.123 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:21:54.123 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@18 -- # waitforiscsidevices 1 00:21:54.123 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=1 00:21:54.123 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:21:54.123 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:21:54.123 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:21:54.123 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:21:54.123 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # true 00:21:54.124 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=0 00:21:54.124 [2024-07-22 17:26:12.853923] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.124 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 1 ']' 00:21:54.124 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@121 -- # sleep 0.1 00:21:54.124 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i++ )) 00:21:54.124 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:21:54.124 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:21:54.124 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:21:54.124 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=1 00:21:54.124 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:21:54.124 17:26:12 
iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:21:54.124 17:26:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2 00:21:54.124 [global] 00:21:54.124 thread=1 00:21:54.124 invalidate=1 00:21:54.124 rw=write 00:21:54.124 time_based=1 00:21:54.124 runtime=2 00:21:54.124 ioengine=libaio 00:21:54.124 direct=1 00:21:54.124 bs=512 00:21:54.124 iodepth=1 00:21:54.124 norandommap=1 00:21:54.124 numjobs=1 00:21:54.124 00:21:54.124 [job0] 00:21:54.124 filename=/dev/sda 00:21:54.124 queue_depth set to 113 (sda) 00:21:54.382 job0: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:21:54.382 fio-3.35 00:21:54.382 Starting 1 thread 00:21:54.382 [2024-07-22 17:26:13.141611] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:56.311 [2024-07-22 17:26:15.252971] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:56.568 00:21:56.568 job0: (groupid=0, jobs=1): err= 0: pid=80042: Mon Jul 22 17:26:15 2024 00:21:56.568 write: IOPS=7926, BW=3963KiB/s (4058kB/s)(7930KiB/2001msec); 0 zone resets 00:21:56.568 slat (usec): min=4, max=444, avg= 6.24, stdev= 4.05 00:21:56.568 clat (usec): min=34, max=2325, avg=119.09, stdev=28.77 00:21:56.568 lat (usec): min=104, max=2331, avg=125.33, stdev=29.18 00:21:56.568 clat percentiles (usec): 00:21:56.568 | 1.00th=[ 106], 5.00th=[ 109], 10.00th=[ 110], 20.00th=[ 112], 00:21:56.568 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 117], 60.00th=[ 119], 00:21:56.568 | 70.00th=[ 123], 80.00th=[ 126], 90.00th=[ 131], 95.00th=[ 137], 00:21:56.568 | 99.00th=[ 151], 99.50th=[ 161], 99.90th=[ 273], 99.95th=[ 529], 00:21:56.568 | 99.99th=[ 1958] 00:21:56.568 bw ( KiB/s): min= 3910, max= 4015, per=100.00%, avg=3963.33, stdev=52.52, samples=3 00:21:56.568 iops : min= 7820, max= 8030, avg=7926.67, 
stdev=105.04, samples=3 00:21:56.568 lat (usec) : 50=0.01%, 100=0.01%, 250=99.89%, 500=0.04%, 750=0.03% 00:21:56.568 lat (usec) : 1000=0.01% 00:21:56.568 lat (msec) : 2=0.01%, 4=0.01% 00:21:56.568 cpu : usr=2.55%, sys=5.95%, ctx=15860, majf=0, minf=1 00:21:56.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:56.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.568 issued rwts: total=0,15860,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:56.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:56.568 00:21:56.568 Run status group 0 (all jobs): 00:21:56.569 WRITE: bw=3963KiB/s (4058kB/s), 3963KiB/s-3963KiB/s (4058kB/s-4058kB/s), io=7930KiB (8120kB), run=2001-2001msec 00:21:56.569 00:21:56.569 Disk stats (read/write): 00:21:56.569 sda: ios=48/15048, merge=0/0, ticks=10/1775, in_queue=1786, util=95.37% 00:21:56.569 17:26:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 2 00:21:56.569 [global] 00:21:56.569 thread=1 00:21:56.569 invalidate=1 00:21:56.569 rw=read 00:21:56.569 time_based=1 00:21:56.569 runtime=2 00:21:56.569 ioengine=libaio 00:21:56.569 direct=1 00:21:56.569 bs=512 00:21:56.569 iodepth=1 00:21:56.569 norandommap=1 00:21:56.569 numjobs=1 00:21:56.569 00:21:56.569 [job0] 00:21:56.569 filename=/dev/sda 00:21:56.569 queue_depth set to 113 (sda) 00:21:56.569 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:21:56.569 fio-3.35 00:21:56.569 Starting 1 thread 00:21:59.096 00:21:59.096 job0: (groupid=0, jobs=1): err= 0: pid=80095: Mon Jul 22 17:26:17 2024 00:21:59.096 read: IOPS=8401, BW=4201KiB/s (4301kB/s)(8406KiB/2001msec) 00:21:59.096 slat (usec): min=5, max=140, avg= 7.19, stdev= 2.78 00:21:59.096 clat (usec): min=81, max=1719, avg=110.32, 
stdev=15.74 00:21:59.096 lat (usec): min=100, max=1759, avg=117.51, stdev=16.61 00:21:59.096 clat percentiles (usec): 00:21:59.096 | 1.00th=[ 98], 5.00th=[ 100], 10.00th=[ 101], 20.00th=[ 103], 00:21:59.096 | 30.00th=[ 105], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 111], 00:21:59.096 | 70.00th=[ 114], 80.00th=[ 118], 90.00th=[ 123], 95.00th=[ 128], 00:21:59.096 | 99.00th=[ 141], 99.50th=[ 147], 99.90th=[ 165], 99.95th=[ 192], 00:21:59.096 | 99.99th=[ 392] 00:21:59.096 bw ( KiB/s): min= 4074, max= 4238, per=99.46%, avg=4178.00, stdev=90.42, samples=3 00:21:59.096 iops : min= 8148, max= 8476, avg=8356.00, stdev=180.84, samples=3 00:21:59.096 lat (usec) : 100=6.79%, 250=93.18%, 500=0.02% 00:21:59.096 lat (msec) : 2=0.01% 00:21:59.096 cpu : usr=4.00%, sys=7.50%, ctx=16853, majf=0, minf=1 00:21:59.096 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:59.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.096 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.096 issued rwts: total=16811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.096 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:59.096 00:21:59.096 Run status group 0 (all jobs): 00:21:59.096 READ: bw=4201KiB/s (4301kB/s), 4201KiB/s-4201KiB/s (4301kB/s-4301kB/s), io=8406KiB (8607kB), run=2001-2001msec 00:21:59.096 00:21:59.096 Disk stats (read/write): 00:21:59.096 sda: ios=15892/0, merge=0/0, ticks=1704/0, in_queue=1704, util=95.03% 00:21:59.096 17:26:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@21 -- # iscsiadm -m node --logout -p 10.0.0.1:3260 00:21:59.096 Logging out of session [sid: 40, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:21:59.096 Logout of [sid: 40, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:21:59.096 17:26:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@22 -- # waitforiscsidevices 0 00:21:59.096 17:26:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=0 00:21:59.096 17:26:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:21:59.096 17:26:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:21:59.096 17:26:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:21:59.096 17:26:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:21:59.096 iscsiadm: No active sessions. 00:21:59.096 17:26:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # true 00:21:59.097 ************************************ 00:21:59.097 END TEST iscsi_tgt_digest 00:21:59.097 ************************************ 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=0 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:21:59.097 00:21:59.097 real 0m9.645s 00:21:59.097 user 0m0.805s 00:21:59.097 sys 0m0.852s 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@10 -- # set +x 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1142 -- # return 0 00:21:59.097 Cleaning up iSCSI connection 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@92 -- # iscsicleanup 
00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:21:59.097 iscsiadm: No matching sessions found 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@981 -- # true 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@983 -- # rm -rf 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@93 -- # killprocess 79814 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@948 -- # '[' -z 79814 ']' 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@952 -- # kill -0 79814 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@953 -- # uname 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79814 00:21:59.097 killing process with pid 79814 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79814' 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@967 -- # kill 79814 00:21:59.097 17:26:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@972 -- # wait 79814 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@94 -- # iscsitestfini 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@131 -- # '[' '' == 
iso ']' 00:22:01.634 00:22:01.634 real 0m15.311s 00:22:01.634 user 0m54.055s 00:22:01.634 sys 0m3.642s 00:22:01.634 ************************************ 00:22:01.634 END TEST iscsi_tgt_digests 00:22:01.634 ************************************ 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:22:01.634 17:26:20 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:22:01.634 17:26:20 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@43 -- # run_test iscsi_tgt_fuzz /home/vagrant/spdk_repo/spdk/test/fuzz/autofuzz_iscsi.sh --timeout=30 00:22:01.634 17:26:20 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:01.634 17:26:20 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:01.634 17:26:20 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:22:01.634 ************************************ 00:22:01.634 START TEST iscsi_tgt_fuzz 00:22:01.634 ************************************ 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/fuzz/autofuzz_iscsi.sh --timeout=30 00:22:01.634 * Looking for test storage... 
00:22:01.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/fuzz 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:22:01.634 17:26:20 
iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@11 -- # iscsitestinit 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@13 -- # '[' -z 10.0.0.1 ']' 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@18 -- # '[' -z 10.0.0.2 ']' 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@23 -- # timing_enter iscsi_fuzz_test 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@25 -- # MALLOC_BDEV_SIZE=64 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@26 -- # MALLOC_BLOCK_SIZE=4096 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@28 -- # TEST_TIMEOUT=1200 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@31 -- # for i in "$@" 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@32 -- # case "$i" in 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@34 -- # TEST_TIMEOUT=30 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@39 -- # timing_enter start_iscsi_tgt 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:01.634 Process iscsipid: 80219 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@42 -- # iscsipid=80219 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- 
fuzz/autofuzz_iscsi.sh@43 -- # echo 'Process iscsipid: 80219' 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@45 -- # trap 'killprocess $iscsipid; exit 1' SIGINT SIGTERM EXIT 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@41 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --disable-cpumask-locks --wait-for-rpc 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@47 -- # waitforlisten 80219 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@829 -- # '[' -z 80219 ']' 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:01.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.634 17:26:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:02.626 17:26:21 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.626 17:26:21 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@862 -- # return 0 00:22:02.626 17:26:21 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@49 -- # rpc_cmd iscsi_set_options -o 60 -a 16 00:22:02.626 17:26:21 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.626 17:26:21 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:02.626 17:26:21 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.626 17:26:21 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@50 -- # rpc_cmd framework_start_init 00:22:02.626 17:26:21 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.626 17:26:21 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:03.562 iscsi_tgt is listening. Running tests... 00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@51 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@52 -- # timing_exit start_iscsi_tgt 00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@54 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@55 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@56 -- # rpc_cmd bdev_malloc_create 64 4096 00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:03.562 Malloc0 00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@57 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:22:03.562 17:26:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@58 -- # sleep 1 00:22:04.938 17:26:23 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@60 -- # trap 'killprocess $iscsipid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:22:04.938 17:26:23 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/iscsi_fuzz/iscsi_fuzz -m 0xF0 -T 10.0.0.1 -t 30 00:22:36.994 Fuzzing completed. Shutting down the fuzz application. 00:22:36.994 00:22:36.994 device 0x6110000160c0 stats: Sent 7898 valid opcode PDUs, 71880 invalid opcode PDUs. 00:22:36.994 17:26:54 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@64 -- # rpc_cmd iscsi_delete_target_node iqn.2016-06.io.spdk:disk1 00:22:36.994 17:26:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.994 17:26:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:36.994 17:26:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.994 17:26:54 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@67 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:36.994 17:26:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.994 17:26:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:36.994 17:26:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.994 17:26:54 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:36.994 17:26:54 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@71 -- # killprocess 80219 00:22:36.994 17:26:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@948 -- # '[' -z 80219 ']' 00:22:36.994 17:26:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@952 -- # kill -0 80219 00:22:36.994 17:26:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@953 -- # uname 00:22:36.994 17:26:54 iscsi_tgt.iscsi_tgt_fuzz -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:36.994 17:26:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80219 00:22:36.994 killing process with pid 80219 00:22:36.994 17:26:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:36.995 17:26:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:36.995 17:26:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80219' 00:22:36.995 17:26:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@967 -- # kill 80219 00:22:36.995 17:26:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@972 -- # wait 80219 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@73 -- # iscsitestfini 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@75 -- # timing_exit iscsi_fuzz_test 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:38.896 ************************************ 00:22:38.896 END TEST iscsi_tgt_fuzz 00:22:38.896 ************************************ 00:22:38.896 00:22:38.896 real 0m37.043s 00:22:38.896 user 3m24.103s 00:22:38.896 sys 0m16.665s 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:38.896 17:26:57 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:22:38.896 17:26:57 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@44 -- # run_test iscsi_tgt_multiconnection /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection/multiconnection.sh 00:22:38.896 17:26:57 iscsi_tgt -- common/autotest_common.sh@1099 -- # 
'[' 2 -le 1 ']' 00:22:38.896 17:26:57 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:38.896 17:26:57 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:22:38.896 ************************************ 00:22:38.896 START TEST iscsi_tgt_multiconnection 00:22:38.896 ************************************ 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection/multiconnection.sh 00:22:38.896 * Looking for test storage... 00:22:38.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:22:38.896 
17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@11 -- # iscsitestinit 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@16 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@18 -- # CONNECTION_NUMBER=30 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@40 -- # timing_enter start_iscsi_tgt 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set 
+x 00:22:38.896 iSCSI target launched. pid: 80676 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@42 -- # iscsipid=80676 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@43 -- # echo 'iSCSI target launched. pid: 80676' 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@44 -- # trap 'remove_backends; iscsicleanup; killprocess $iscsipid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@41 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@46 -- # waitforlisten 80676 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 80676 ']' 00:22:38.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.896 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.897 17:26:57 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:38.897 [2024-07-22 17:26:57.703603] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:22:38.897 [2024-07-22 17:26:57.704517] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80676 ] 00:22:39.155 [2024-07-22 17:26:57.905624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.413 [2024-07-22 17:26:58.253428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.980 17:26:58 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.980 17:26:58 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:22:39.980 17:26:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 128 00:22:39.980 17:26:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:22:41.355 17:26:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:22:41.355 17:26:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:41.615 17:27:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@50 -- # timing_exit start_iscsi_tgt 00:22:41.615 17:27:00 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:41.615 17:27:00 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:41.615 17:27:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:22:41.882 17:27:00 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:22:42.141 Creating an iSCSI target node. 00:22:42.141 17:27:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@55 -- # echo 'Creating an iSCSI target node.' 00:22:42.141 17:27:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs0 -c 1048576 00:22:42.398 17:27:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@56 -- # ls_guid=5d91cf5a-b390-4fbb-83fc-519d382d2df9 00:22:42.398 17:27:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@59 -- # get_lvs_free_mb 5d91cf5a-b390-4fbb-83fc-519d382d2df9 00:22:42.398 17:27:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1364 -- # local lvs_uuid=5d91cf5a-b390-4fbb-83fc-519d382d2df9 00:22:42.398 17:27:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1365 -- # local lvs_info 00:22:42.398 17:27:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1366 -- # local fc 00:22:42.398 17:27:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1367 -- # local cs 00:22:42.398 17:27:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:42.656 17:27:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:22:42.656 { 00:22:42.656 "uuid": "5d91cf5a-b390-4fbb-83fc-519d382d2df9", 00:22:42.656 "name": "lvs0", 00:22:42.656 "base_bdev": "Nvme0n1", 00:22:42.656 "total_data_clusters": 5099, 00:22:42.656 "free_clusters": 5099, 00:22:42.656 "block_size": 4096, 00:22:42.656 "cluster_size": 1048576 00:22:42.656 } 00:22:42.656 ]' 00:22:42.656 17:27:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1369 -- # jq '.[] | 
select(.uuid=="5d91cf5a-b390-4fbb-83fc-519d382d2df9") .free_clusters' 00:22:42.913 17:27:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1369 -- # fc=5099 00:22:42.913 17:27:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="5d91cf5a-b390-4fbb-83fc-519d382d2df9") .cluster_size' 00:22:42.913 5099 00:22:42.913 17:27:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1370 -- # cs=1048576 00:22:42.913 17:27:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1373 -- # free_mb=5099 00:22:42.913 17:27:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1374 -- # echo 5099 00:22:42.913 17:27:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@60 -- # lvol_bdev_size=169 00:22:42.913 17:27:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # seq 1 30 00:22:42.913 17:27:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:42.913 17:27:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_1 169 00:22:43.171 395eee02-8ec1-4d43-9a2d-1a999857a7af 00:22:43.171 17:27:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:43.171 17:27:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_2 169 00:22:43.428 8f986147-15f5-4611-952b-27f49d91e49b 00:22:43.428 17:27:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:43.428 17:27:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_3 169 00:22:43.686 81c79a2c-8f30-4109-a2ef-4aaf132d3882 00:22:43.686 17:27:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:43.686 17:27:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_4 169 00:22:43.943 fccff222-7c45-44f0-b0cd-3e0c91f9f42e 00:22:43.943 17:27:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:43.944 17:27:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_5 169 00:22:44.202 98eaf437-78fe-4ba9-9195-a69747af6e20 00:22:44.202 17:27:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:44.202 17:27:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_6 169 00:22:44.464 3b7f4ebc-52b0-4e8a-9d03-da6cd16001a6 00:22:44.464 17:27:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:44.464 17:27:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_7 169 00:22:44.464 28d62a9f-fcb4-4ed1-8711-a540c850cd9d 00:22:44.727 17:27:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:44.727 17:27:03 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_8 169 00:22:44.984 4a5e5873-37b5-4159-ba74-fefd8e520794 00:22:44.984 17:27:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:44.984 17:27:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_9 169 00:22:44.984 0e116c10-be5e-4473-8b30-e895d74a9afc 00:22:44.984 17:27:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:44.984 17:27:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_10 169 00:22:45.549 b5da14bf-227b-4810-9637-2236f0ca508b 00:22:45.549 17:27:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:45.549 17:27:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_11 169 00:22:45.549 20f8252d-f273-4502-8e59-de2f3524bc5a 00:22:45.807 17:27:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:45.807 17:27:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_12 169 00:22:45.807 ef7b70b7-be3a-4ea0-aecc-55cf4a3d0e03 00:22:46.136 17:27:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:46.136 17:27:04 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_13 169 00:22:46.136 e21c58b3-8bf5-4b44-b00e-91fca26ad680 00:22:46.136 17:27:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:46.136 17:27:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_14 169 00:22:46.395 d6846494-5d41-4a8f-a69c-0fcba421e070 00:22:46.395 17:27:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:46.395 17:27:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_15 169 00:22:46.653 55b94659-258a-4531-8c58-29e4ac5b1bcb 00:22:46.653 17:27:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:46.653 17:27:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_16 169 00:22:47.220 b9baf384-900f-47e7-b117-41a9d9972109 00:22:47.221 17:27:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:47.221 17:27:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_17 169 00:22:47.221 b7a5d8e8-834e-4896-8ee9-de1183921dae 00:22:47.221 17:27:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 
$CONNECTION_NUMBER) 00:22:47.221 17:27:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_18 169 00:22:47.480 5fb5d684-e8a6-4e1a-94d7-6ceb11800523 00:22:47.480 17:27:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:47.481 17:27:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_19 169 00:22:47.739 4ca78959-94f1-429b-a6a0-a128204bd81c 00:22:47.739 17:27:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:47.739 17:27:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_20 169 00:22:47.996 25b3e514-d79d-401e-b6cd-8b6059bd655c 00:22:47.996 17:27:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:47.996 17:27:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_21 169 00:22:48.263 f267bcfc-0b7f-49e4-afa8-c9591e2c09cd 00:22:48.263 17:27:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:48.263 17:27:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_22 169 00:22:48.521 d59d82c6-4749-4bf9-914d-73654ce9654c 00:22:48.521 17:27:07 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:48.521 17:27:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_23 169 00:22:48.779 0cbcb6f0-fd0a-4cae-8615-295560dc6981 00:22:48.779 17:27:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:48.779 17:27:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_24 169 00:22:49.038 18d5e6bc-4cba-4012-b01f-3f3d420a7127 00:22:49.038 17:27:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:49.038 17:27:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_25 169 00:22:49.297 24253da9-8ae4-4151-9a6d-c8ee65524fbc 00:22:49.297 17:27:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:49.297 17:27:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_26 169 00:22:49.555 b7f6aab8-739a-4f15-ae0d-27e4b2b8a597 00:22:49.555 17:27:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:49.555 17:27:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_27 169 00:22:49.813 2e8bab81-2cc5-4c68-9a07-cdcce5f261cd 00:22:49.813 17:27:08 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:49.813 17:27:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_28 169 00:22:50.071 dfc34f5a-c118-4ff7-ac1d-8e4c1e2a42af 00:22:50.071 17:27:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:50.071 17:27:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_29 169 00:22:50.329 7b291f12-589d-46b7-89f6-e30ecbe113e7 00:22:50.329 17:27:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:50.329 17:27:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d91cf5a-b390-4fbb-83fc-519d382d2df9 lbd_30 169 00:22:50.588 37875f1f-740b-4007-87bf-73dd59391556 00:22:50.588 17:27:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # seq 1 30 00:22:50.588 17:27:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:50.588 17:27:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_1:0 00:22:50.588 17:27:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias lvs0/lbd_1:0 1:2 256 -d 00:22:50.588 17:27:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:50.588 17:27:09 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_2:0 00:22:50.588 17:27:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target2 Target2_alias lvs0/lbd_2:0 1:2 256 -d 00:22:50.846 17:27:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:50.846 17:27:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_3:0 00:22:50.846 17:27:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias lvs0/lbd_3:0 1:2 256 -d 00:22:51.105 17:27:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:51.105 17:27:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_4:0 00:22:51.105 17:27:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target4 Target4_alias lvs0/lbd_4:0 1:2 256 -d 00:22:51.364 17:27:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:51.364 17:27:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_5:0 00:22:51.364 17:27:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target5 Target5_alias lvs0/lbd_5:0 1:2 256 -d 00:22:51.622 17:27:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:51.622 17:27:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_6:0 00:22:51.622 
17:27:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target6 Target6_alias lvs0/lbd_6:0 1:2 256 -d 00:22:51.879 17:27:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:51.879 17:27:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_7:0 00:22:51.879 17:27:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target7 Target7_alias lvs0/lbd_7:0 1:2 256 -d 00:22:52.136 17:27:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:52.136 17:27:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_8:0 00:22:52.136 17:27:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target8 Target8_alias lvs0/lbd_8:0 1:2 256 -d 00:22:52.392 17:27:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:52.392 17:27:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_9:0 00:22:52.392 17:27:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target9 Target9_alias lvs0/lbd_9:0 1:2 256 -d 00:22:52.648 17:27:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:52.648 17:27:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_10:0 00:22:52.648 17:27:11 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target10 Target10_alias lvs0/lbd_10:0 1:2 256 -d 00:22:52.904 17:27:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:52.904 17:27:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_11:0 00:22:52.904 17:27:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target11 Target11_alias lvs0/lbd_11:0 1:2 256 -d 00:22:53.161 17:27:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:53.161 17:27:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_12:0 00:22:53.161 17:27:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target12 Target12_alias lvs0/lbd_12:0 1:2 256 -d 00:22:53.418 17:27:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:53.418 17:27:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_13:0 00:22:53.418 17:27:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target13 Target13_alias lvs0/lbd_13:0 1:2 256 -d 00:22:53.418 17:27:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:53.418 17:27:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_14:0 00:22:53.418 17:27:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target14 Target14_alias lvs0/lbd_14:0 1:2 256 -d 00:22:53.676 17:27:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:53.676 17:27:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_15:0 00:22:53.676 17:27:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target15 Target15_alias lvs0/lbd_15:0 1:2 256 -d 00:22:53.933 17:27:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:53.933 17:27:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_16:0 00:22:53.933 17:27:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target16 Target16_alias lvs0/lbd_16:0 1:2 256 -d 00:22:54.191 17:27:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:54.191 17:27:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_17:0 00:22:54.191 17:27:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target17 Target17_alias lvs0/lbd_17:0 1:2 256 -d 00:22:54.448 17:27:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:54.448 17:27:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_18:0 00:22:54.448 17:27:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
iscsi_create_target_node Target18 Target18_alias lvs0/lbd_18:0 1:2 256 -d 00:22:54.706 17:27:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:54.706 17:27:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_19:0 00:22:54.706 17:27:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target19 Target19_alias lvs0/lbd_19:0 1:2 256 -d 00:22:54.963 17:27:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:54.963 17:27:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_20:0 00:22:54.963 17:27:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target20 Target20_alias lvs0/lbd_20:0 1:2 256 -d 00:22:55.221 17:27:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:55.221 17:27:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_21:0 00:22:55.221 17:27:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target21 Target21_alias lvs0/lbd_21:0 1:2 256 -d 00:22:55.479 17:27:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:55.479 17:27:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_22:0 00:22:55.479 17:27:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target22 Target22_alias 
lvs0/lbd_22:0 1:2 256 -d 00:22:55.737 17:27:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:55.737 17:27:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_23:0 00:22:55.737 17:27:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target23 Target23_alias lvs0/lbd_23:0 1:2 256 -d 00:22:55.995 17:27:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:55.995 17:27:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_24:0 00:22:55.995 17:27:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target24 Target24_alias lvs0/lbd_24:0 1:2 256 -d 00:22:56.253 17:27:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:56.253 17:27:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_25:0 00:22:56.253 17:27:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target25 Target25_alias lvs0/lbd_25:0 1:2 256 -d 00:22:56.511 17:27:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:56.511 17:27:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_26:0 00:22:56.511 17:27:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target26 Target26_alias lvs0/lbd_26:0 1:2 256 -d 00:22:56.768 17:27:15 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:56.768 17:27:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_27:0 00:22:56.768 17:27:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target27 Target27_alias lvs0/lbd_27:0 1:2 256 -d 00:22:57.025 17:27:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:57.025 17:27:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_28:0 00:22:57.025 17:27:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target28 Target28_alias lvs0/lbd_28:0 1:2 256 -d 00:22:57.282 17:27:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:57.282 17:27:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_29:0 00:22:57.282 17:27:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target29 Target29_alias lvs0/lbd_29:0 1:2 256 -d 00:22:57.538 17:27:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:22:57.538 17:27:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_30:0 00:22:57.538 17:27:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target30 Target30_alias lvs0/lbd_30:0 1:2 256 -d 00:22:57.538 17:27:16 iscsi_tgt.iscsi_tgt_multiconnection -- 
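The trace above (multiconnection.sh@65–67) repeats one loop thirty times: pick a LUN from the lvs0 lvstore and create a target node for it over RPC. A condensed dry-run reconstruction of that loop is sketched below — note this is an assumption based only on the trace, and `echo` stands in for the real `scripts/rpc.py` invocation so the sketch runs without a live SPDK target:

```shell
# Dry-run sketch of the target-creation loop seen in the trace above.
# Prints the rpc.py command for each target instead of executing it
# (CONNECTION_NUMBER=30 is inferred from Target1..Target30 in the log).
CONNECTION_NUMBER=30
for i in $(seq 1 "$CONNECTION_NUMBER"); do
    lun="lvs0/lbd_${i}:0"
    echo "rpc.py iscsi_create_target_node Target${i} Target${i}_alias ${lun} 1:2 256 -d"
done
```

In the real script the positional arguments after the LUN map are the portal-group:initiator-group pair (`1:2`), the queue depth (`256`), and `-d` to disable CHAP.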
multiconnection/multiconnection.sh@69 -- # sleep 1 00:22:58.909 Logging into iSCSI target. 00:22:58.909 17:27:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@71 -- # echo 'Logging into iSCSI target.' 00:22:58.909 17:27:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@72 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:22:58.909 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:22:58.909 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:22:58.909 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:22:58.909 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:22:58.909 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:22:58.909 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:22:58.909 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:22:58.909 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:22:58.909 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:22:58.909 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:22:58.909 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target11 00:22:58.909 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target12 00:22:58.909 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target13 00:22:58.909 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target14 00:22:58.909 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target15 00:22:58.909 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target16 00:22:58.909 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target17 00:22:58.909 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target18 00:22:58.910 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target19 00:22:58.910 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target20 00:22:58.910 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target21 00:22:58.910 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target22 00:22:58.910 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target23 00:22:58.910 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target24 00:22:58.910 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target25 00:22:58.910 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target26 00:22:58.910 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target27 00:22:58.910 10.0.0.1:3260,1 
iqn.2016-06.io.spdk:Target28 00:22:58.910 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target29 00:22:58.910 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target30 00:22:58.910 17:27:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@73 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:22:58.910 [2024-07-22 17:27:17.568240] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:58.910 [2024-07-22 17:27:17.583421] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:58.910 [2024-07-22 17:27:17.602540] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:58.910 [2024-07-22 17:27:17.639421] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:58.910 [2024-07-22 17:27:17.650666] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:58.910 [2024-07-22 17:27:17.680060] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:58.910 [2024-07-22 17:27:17.746122] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:58.910 [2024-07-22 17:27:17.751780] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:58.910 [2024-07-22 17:27:17.774604] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:58.910 [2024-07-22 17:27:17.795278] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:58.910 [2024-07-22 17:27:17.832614] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:59.168 [2024-07-22 17:27:17.863990] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:59.168 [2024-07-22 17:27:17.891618] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:22:59.168 
Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] 00:22:59.168 Logging in 
to [iface: default, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] 00:22:59.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] 00:22:59.168 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:22:59.168 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:22:59.168 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:22:59.168 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:22:59.168 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:22:59.168 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:22:59.169 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:22:59.169 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 
00:22:59.169 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:22:59.169 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:22:59.169 Login to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:22:59.169 Login to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:22:59.169 Login to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:22:59.169 Login to [iface: default, target: iqn.2016-06.io.spdk:Target14, por[2024-07-22 17:27:17.910639] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:59.169 [2024-07-22 17:27:17.931895] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:59.169 [2024-07-22 17:27:17.968156] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:59.169 [2024-07-22 17:27:18.006642] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:59.169 [2024-07-22 17:27:18.034859] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:59.169 [2024-07-22 17:27:18.068192] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:59.169 [2024-07-22 17:27:18.082725] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:59.169 [2024-07-22 17:27:18.103584] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:59.426 [2024-07-22 17:27:18.143931] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:59.426 [2024-07-22 17:27:18.177614] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:59.426 [2024-07-22 17:27:18.192147] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:59.426 [2024-07-22 
17:27:18.221786] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:59.426 [2024-07-22 17:27:18.252401] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:59.426 [2024-07-22 17:27:18.280999] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:59.426 [2024-07-22 17:27:18.308429] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:59.426 [2024-07-22 17:27:18.339801] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:59.426 tal: 10.0.0.1,3260] successful. 00:22:59.426 Login to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:22:59.426 Login to [iface: default, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] successful. 00:22:59.426 Login to [iface: default, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] successful. 00:22:59.426 Login to [iface: default, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] successful. 00:22:59.426 Login to [iface: default, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] successful. 00:22:59.426 Login to [iface: default, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] successful. 00:22:59.426 Login to [iface: default, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] successful. 00:22:59.426 Login to [iface: default, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] successful. 00:22:59.426 Login to [iface: default, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] successful. 00:22:59.426 Login to [iface: default, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] successful. 00:22:59.426 Login to [iface: default, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] successful. 00:22:59.426 Login to [iface: default, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] successful. 
00:22:59.426 Login to [iface: default, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] successful. 00:22:59.426 Login to [iface: default, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] successful. 00:22:59.426 Login to [iface: default, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] successful. 00:22:59.426 Login to [iface: default, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] successful. 00:22:59.426 17:27:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@74 -- # waitforiscsidevices 30 00:22:59.426 17:27:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@116 -- # local num=30 00:22:59.426 17:27:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:22:59.426 17:27:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:22:59.426 17:27:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:22:59.426 17:27:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:22:59.426 [2024-07-22 17:27:18.357323] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:59.684 17:27:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # n=30 00:22:59.684 17:27:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@120 -- # '[' 30 -ne 30 ']' 00:22:59.684 17:27:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@123 -- # return 0 00:22:59.684 Running FIO 00:22:59.684 17:27:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@76 -- # echo 'Running FIO' 00:22:59.684 17:27:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 64 -t randrw -r 5 00:22:59.684 [global] 00:22:59.684 thread=1 00:22:59.684 invalidate=1 00:22:59.684 rw=randrw 00:22:59.684 
time_based=1 00:22:59.684 runtime=5 00:22:59.684 ioengine=libaio 00:22:59.684 direct=1 00:22:59.684 bs=131072 00:22:59.684 iodepth=64 00:22:59.684 norandommap=1 00:22:59.684 numjobs=1 00:22:59.684 00:22:59.684 [job0] 00:22:59.684 filename=/dev/sda 00:22:59.684 [job1] 00:22:59.684 filename=/dev/sdb 00:22:59.684 [job2] 00:22:59.684 filename=/dev/sdc 00:22:59.684 [job3] 00:22:59.684 filename=/dev/sdd 00:22:59.684 [job4] 00:22:59.684 filename=/dev/sde 00:22:59.684 [job5] 00:22:59.684 filename=/dev/sdf 00:22:59.684 [job6] 00:22:59.684 filename=/dev/sdg 00:22:59.684 [job7] 00:22:59.684 filename=/dev/sdh 00:22:59.684 [job8] 00:22:59.684 filename=/dev/sdi 00:22:59.684 [job9] 00:22:59.684 filename=/dev/sdj 00:22:59.684 [job10] 00:22:59.684 filename=/dev/sdk 00:22:59.684 [job11] 00:22:59.684 filename=/dev/sdl 00:22:59.684 [job12] 00:22:59.684 filename=/dev/sdm 00:22:59.684 [job13] 00:22:59.684 filename=/dev/sdn 00:22:59.684 [job14] 00:22:59.684 filename=/dev/sdo 00:22:59.684 [job15] 00:22:59.684 filename=/dev/sdp 00:22:59.684 [job16] 00:22:59.684 filename=/dev/sdq 00:22:59.684 [job17] 00:22:59.684 filename=/dev/sdr 00:22:59.684 [job18] 00:22:59.684 filename=/dev/sds 00:22:59.684 [job19] 00:22:59.684 filename=/dev/sdt 00:22:59.684 [job20] 00:22:59.684 filename=/dev/sdu 00:22:59.684 [job21] 00:22:59.684 filename=/dev/sdv 00:22:59.684 [job22] 00:22:59.684 filename=/dev/sdw 00:22:59.684 [job23] 00:22:59.684 filename=/dev/sdx 00:22:59.684 [job24] 00:22:59.684 filename=/dev/sdy 00:22:59.684 [job25] 00:22:59.684 filename=/dev/sdz 00:22:59.684 [job26] 00:22:59.684 filename=/dev/sdaa 00:22:59.684 [job27] 00:22:59.684 filename=/dev/sdab 00:22:59.684 [job28] 00:22:59.684 filename=/dev/sdac 00:22:59.684 [job29] 00:22:59.684 filename=/dev/sdad 00:23:00.250 queue_depth set to 113 (sda) 00:23:00.250 queue_depth set to 113 (sdb) 00:23:00.250 queue_depth set to 113 (sdc) 00:23:00.250 queue_depth set to 113 (sdd) 00:23:00.250 queue_depth set to 113 (sde) 00:23:00.250 queue_depth set to 113 
(sdf) 00:23:00.250 queue_depth set to 113 (sdg) 00:23:00.250 queue_depth set to 113 (sdh) 00:23:00.250 queue_depth set to 113 (sdi) 00:23:00.250 queue_depth set to 113 (sdj) 00:23:00.250 queue_depth set to 113 (sdk) 00:23:00.509 queue_depth set to 113 (sdl) 00:23:00.509 queue_depth set to 113 (sdm) 00:23:00.509 queue_depth set to 113 (sdn) 00:23:00.509 queue_depth set to 113 (sdo) 00:23:00.509 queue_depth set to 113 (sdp) 00:23:00.509 queue_depth set to 113 (sdq) 00:23:00.509 queue_depth set to 113 (sdr) 00:23:00.509 queue_depth set to 113 (sds) 00:23:00.509 queue_depth set to 113 (sdt) 00:23:00.509 queue_depth set to 113 (sdu) 00:23:00.509 queue_depth set to 113 (sdv) 00:23:00.855 queue_depth set to 113 (sdw) 00:23:00.855 queue_depth set to 113 (sdx) 00:23:00.855 queue_depth set to 113 (sdy) 00:23:00.855 queue_depth set to 113 (sdz) 00:23:00.855 queue_depth set to 113 (sdaa) 00:23:00.855 queue_depth set to 113 (sdab) 00:23:00.855 queue_depth set to 113 (sdac) 00:23:00.855 queue_depth set to 113 (sdad) 00:23:01.121 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.121 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.121 job2: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.121 job3: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.121 job4: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.121 job5: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.121 job6: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.121 job7: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, 
ioengine=libaio, iodepth=64 00:23:01.121 job8: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.121 job9: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.122 job10: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.122 job11: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.122 job12: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.122 job13: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.122 job14: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.122 job15: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.122 job16: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.122 job17: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.122 job18: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.122 job19: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.122 job20: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.122 job21: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.122 job22: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.122 job23: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 
128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.122 job24: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.122 job25: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.122 job26: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.122 job27: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.122 job28: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.122 job29: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:23:01.122 fio-3.35 00:23:01.122 Starting 30 threads 00:23:01.122 [2024-07-22 17:27:19.804270] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.808647] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.812249] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.815748] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.819307] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.822743] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.825102] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.827590] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.830067] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.832413] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.835086] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.837351] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.839637] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.841828] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.844054] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.846368] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.848700] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.850991] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.854066] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.856830] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.859533] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.862838] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.866659] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.870509] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.873827] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.876471] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: 
unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.879335] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.882342] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.888255] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:01.122 [2024-07-22 17:27:19.891036] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:07.689 [2024-07-22 17:27:26.016776] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:07.689 [2024-07-22 17:27:26.035397] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:07.689 [2024-07-22 17:27:26.039246] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:07.689 [2024-07-22 17:27:26.042232] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:07.689 [2024-07-22 17:27:26.045294] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:07.689 [2024-07-22 17:27:26.047860] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:07.689 [2024-07-22 17:27:26.051272] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:07.689 [2024-07-22 17:27:26.054022] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:07.689 [2024-07-22 17:27:26.056545] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:07.689 [2024-07-22 17:27:26.058963] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:07.689 [2024-07-22 17:27:26.061472] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:07.689 [2024-07-22 17:27:26.064008] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:07.689 
[2024-07-22 17:27:26.066500] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:07.689 [2024-07-22 17:27:26.069140] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:07.689 [2024-07-22 17:27:26.071778] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:07.689 [2024-07-22 17:27:26.075056] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:07.689 [2024-07-22 17:27:26.077599] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:07.689 [2024-07-22 17:27:26.080177] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:07.689 00:23:07.689 job0: (groupid=0, jobs=1): err= 0: pid=81611: Mon Jul 22 17:27:26 2024 00:23:07.689 read: IOPS=55, BW=7048KiB/s (7217kB/s)(38.1MiB/5539msec) 00:23:07.689 slat (usec): min=8, max=727, avg=35.29, stdev=57.79 00:23:07.689 clat (msec): min=44, max=561, avg=86.22, stdev=55.56 00:23:07.689 lat (msec): min=44, max=561, avg=86.26, stdev=55.56 00:23:07.689 clat percentiles (msec): 00:23:07.689 | 1.00th=[ 51], 5.00th=[ 62], 10.00th=[ 63], 20.00th=[ 65], 00:23:07.689 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 68], 60.00th=[ 70], 00:23:07.689 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 138], 95.00th=[ 201], 00:23:07.689 | 99.00th=[ 236], 99.50th=[ 558], 99.90th=[ 558], 99.95th=[ 558], 00:23:07.689 | 99.99th=[ 558] 00:23:07.689 bw ( KiB/s): min= 5120, max=13824, per=3.33%, avg=7750.50, stdev=2766.95, samples=10 00:23:07.689 iops : min= 40, max= 108, avg=60.20, stdev=21.63, samples=10 00:23:07.689 write: IOPS=61, BW=7811KiB/s (7998kB/s)(42.2MiB/5539msec); 0 zone resets 00:23:07.689 slat (usec): min=13, max=466, avg=46.34, stdev=52.72 00:23:07.689 clat (msec): min=251, max=1524, avg=969.22, stdev=179.79 00:23:07.689 lat (msec): min=251, max=1524, avg=969.26, stdev=179.80 00:23:07.689 clat percentiles (msec): 00:23:07.689 | 1.00th=[ 313], 5.00th=[ 592], 
10.00th=[ 709], 20.00th=[ 936], 00:23:07.689 | 30.00th=[ 978], 40.00th=[ 995], 50.00th=[ 1003], 60.00th=[ 1020], 00:23:07.689 | 70.00th=[ 1028], 80.00th=[ 1045], 90.00th=[ 1062], 95.00th=[ 1217], 00:23:07.689 | 99.00th=[ 1435], 99.50th=[ 1469], 99.90th=[ 1519], 99.95th=[ 1519], 00:23:07.689 | 99.99th=[ 1519] 00:23:07.689 bw ( KiB/s): min= 256, max= 7936, per=2.79%, avg=6441.00, stdev=2600.26, samples=11 00:23:07.689 iops : min= 2, max= 62, avg=50.00, stdev=20.18, samples=11 00:23:07.689 lat (msec) : 50=0.31%, 100=38.72%, 250=8.09%, 500=1.40%, 750=4.82% 00:23:07.689 lat (msec) : 1000=17.73%, 2000=28.93% 00:23:07.689 cpu : usr=0.16%, sys=0.40%, ctx=433, majf=0, minf=1 00:23:07.689 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2% 00:23:07.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.689 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.689 issued rwts: total=305,338,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.689 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.689 job1: (groupid=0, jobs=1): err= 0: pid=81612: Mon Jul 22 17:27:26 2024 00:23:07.689 read: IOPS=53, BW=6872KiB/s (7037kB/s)(37.4MiB/5569msec) 00:23:07.689 slat (usec): min=8, max=1470, avg=46.24, stdev=107.04 00:23:07.689 clat (msec): min=14, max=608, avg=84.03, stdev=51.09 00:23:07.689 lat (msec): min=15, max=608, avg=84.08, stdev=51.09 00:23:07.689 clat percentiles (msec): 00:23:07.689 | 1.00th=[ 46], 5.00th=[ 63], 10.00th=[ 63], 20.00th=[ 65], 00:23:07.689 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:23:07.689 | 70.00th=[ 70], 80.00th=[ 75], 90.00th=[ 142], 95.00th=[ 188], 00:23:07.689 | 99.00th=[ 305], 99.50th=[ 317], 99.90th=[ 609], 99.95th=[ 609], 00:23:07.689 | 99.99th=[ 609] 00:23:07.689 bw ( KiB/s): min= 3840, max=13312, per=3.27%, avg=7628.80, stdev=2624.05, samples=10 00:23:07.689 iops : min= 30, max= 104, avg=59.60, stdev=20.50, samples=10 00:23:07.689 write: 
IOPS=60, BW=7746KiB/s (7932kB/s)(42.1MiB/5569msec); 0 zone resets 00:23:07.689 slat (usec): min=13, max=3465, avg=61.57, stdev=201.19 00:23:07.689 clat (msec): min=264, max=1554, avg=979.89, stdev=180.79 00:23:07.689 lat (msec): min=264, max=1554, avg=979.95, stdev=180.77 00:23:07.689 clat percentiles (msec): 00:23:07.689 | 1.00th=[ 334], 5.00th=[ 625], 10.00th=[ 743], 20.00th=[ 936], 00:23:07.689 | 30.00th=[ 969], 40.00th=[ 986], 50.00th=[ 1003], 60.00th=[ 1020], 00:23:07.689 | 70.00th=[ 1028], 80.00th=[ 1045], 90.00th=[ 1083], 95.00th=[ 1284], 00:23:07.689 | 99.00th=[ 1485], 99.50th=[ 1536], 99.90th=[ 1552], 99.95th=[ 1552], 00:23:07.689 | 99.99th=[ 1552] 00:23:07.689 bw ( KiB/s): min= 256, max= 7936, per=2.77%, avg=6400.00, stdev=2635.68, samples=11 00:23:07.689 iops : min= 2, max= 62, avg=50.00, stdev=20.59, samples=11 00:23:07.689 lat (msec) : 20=0.16%, 50=0.63%, 100=39.15%, 250=6.29%, 500=1.89% 00:23:07.689 lat (msec) : 750=4.25%, 1000=19.18%, 2000=28.46% 00:23:07.689 cpu : usr=0.16%, sys=0.43%, ctx=434, majf=0, minf=1 00:23:07.689 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.0%, >=64=90.1% 00:23:07.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.689 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.689 issued rwts: total=299,337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.689 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.689 job2: (groupid=0, jobs=1): err= 0: pid=81613: Mon Jul 22 17:27:26 2024 00:23:07.689 read: IOPS=61, BW=7863KiB/s (8051kB/s)(42.5MiB/5535msec) 00:23:07.689 slat (usec): min=9, max=385, avg=32.77, stdev=25.27 00:23:07.689 clat (msec): min=46, max=570, avg=89.52, stdev=73.28 00:23:07.689 lat (msec): min=46, max=570, avg=89.55, stdev=73.28 00:23:07.689 clat percentiles (msec): 00:23:07.689 | 1.00th=[ 48], 5.00th=[ 56], 10.00th=[ 63], 20.00th=[ 65], 00:23:07.689 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:23:07.689 | 
70.00th=[ 71], 80.00th=[ 74], 90.00th=[ 159], 95.00th=[ 220], 00:23:07.689 | 99.00th=[ 558], 99.50th=[ 575], 99.90th=[ 575], 99.95th=[ 575], 00:23:07.689 | 99.99th=[ 575] 00:23:07.689 bw ( KiB/s): min= 4598, max=12312, per=3.68%, avg=8577.90, stdev=2585.69, samples=10 00:23:07.689 iops : min= 35, max= 96, avg=66.80, stdev=20.32, samples=10 00:23:07.689 write: IOPS=60, BW=7724KiB/s (7909kB/s)(41.8MiB/5535msec); 0 zone resets 00:23:07.689 slat (usec): min=13, max=383, avg=40.62, stdev=30.54 00:23:07.689 clat (msec): min=267, max=1517, avg=967.71, stdev=181.14 00:23:07.689 lat (msec): min=267, max=1517, avg=967.75, stdev=181.14 00:23:07.689 clat percentiles (msec): 00:23:07.689 | 1.00th=[ 384], 5.00th=[ 584], 10.00th=[ 735], 20.00th=[ 927], 00:23:07.689 | 30.00th=[ 961], 40.00th=[ 978], 50.00th=[ 995], 60.00th=[ 1003], 00:23:07.689 | 70.00th=[ 1011], 80.00th=[ 1036], 90.00th=[ 1062], 95.00th=[ 1267], 00:23:07.689 | 99.00th=[ 1502], 99.50th=[ 1519], 99.90th=[ 1519], 99.95th=[ 1519], 00:23:07.689 | 99.99th=[ 1519] 00:23:07.689 bw ( KiB/s): min= 256, max= 7936, per=2.78%, avg=6422.18, stdev=2650.72, samples=11 00:23:07.689 iops : min= 2, max= 62, avg=50.00, stdev=20.66, samples=11 00:23:07.689 lat (msec) : 50=1.04%, 100=41.99%, 250=5.64%, 500=2.23%, 750=4.60% 00:23:07.689 lat (msec) : 1000=23.00%, 2000=21.51% 00:23:07.689 cpu : usr=0.22%, sys=0.36%, ctx=419, majf=0, minf=1 00:23:07.689 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7% 00:23:07.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.689 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.689 issued rwts: total=340,334,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.689 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.689 job3: (groupid=0, jobs=1): err= 0: pid=81614: Mon Jul 22 17:27:26 2024 00:23:07.689 read: IOPS=57, BW=7319KiB/s (7495kB/s)(39.6MiB/5544msec) 00:23:07.689 slat (usec): min=7, max=105, avg=29.71, 
stdev=16.26 00:23:07.689 clat (msec): min=4, max=570, avg=85.29, stdev=72.25 00:23:07.689 lat (msec): min=4, max=570, avg=85.32, stdev=72.25 00:23:07.689 clat percentiles (msec): 00:23:07.689 | 1.00th=[ 13], 5.00th=[ 54], 10.00th=[ 63], 20.00th=[ 65], 00:23:07.689 | 30.00th=[ 67], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:23:07.690 | 70.00th=[ 70], 80.00th=[ 74], 90.00th=[ 124], 95.00th=[ 205], 00:23:07.690 | 99.00th=[ 558], 99.50th=[ 575], 99.90th=[ 575], 99.95th=[ 575], 00:23:07.690 | 99.99th=[ 575] 00:23:07.690 bw ( KiB/s): min= 4096, max=13568, per=3.43%, avg=7987.20, stdev=2510.60, samples=10 00:23:07.690 iops : min= 32, max= 106, avg=62.40, stdev=19.61, samples=10 00:23:07.690 write: IOPS=59, BW=7665KiB/s (7849kB/s)(41.5MiB/5544msec); 0 zone resets 00:23:07.690 slat (usec): min=11, max=247, avg=36.19, stdev=20.28 00:23:07.690 clat (msec): min=225, max=1598, avg=985.31, stdev=189.17 00:23:07.690 lat (msec): min=225, max=1598, avg=985.35, stdev=189.18 00:23:07.690 clat percentiles (msec): 00:23:07.690 | 1.00th=[ 321], 5.00th=[ 609], 10.00th=[ 743], 20.00th=[ 961], 00:23:07.690 | 30.00th=[ 978], 40.00th=[ 995], 50.00th=[ 1011], 60.00th=[ 1011], 00:23:07.690 | 70.00th=[ 1028], 80.00th=[ 1045], 90.00th=[ 1083], 95.00th=[ 1301], 00:23:07.690 | 99.00th=[ 1552], 99.50th=[ 1586], 99.90th=[ 1603], 99.95th=[ 1603], 00:23:07.690 | 99.99th=[ 1603] 00:23:07.690 bw ( KiB/s): min= 2048, max= 7936, per=3.04%, avg=7014.40, stdev=1762.09, samples=10 00:23:07.690 iops : min= 16, max= 62, avg=54.80, stdev=13.77, samples=10 00:23:07.690 lat (msec) : 10=0.31%, 20=0.77%, 50=1.08%, 100=40.83%, 250=4.62% 00:23:07.690 lat (msec) : 500=1.69%, 750=4.78%, 1000=17.26%, 2000=28.66% 00:23:07.690 cpu : usr=0.13%, sys=0.41%, ctx=390, majf=0, minf=1 00:23:07.690 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3% 00:23:07.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.690 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.2%, >=64=0.0% 00:23:07.690 issued rwts: total=317,332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.690 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.690 job4: (groupid=0, jobs=1): err= 0: pid=81629: Mon Jul 22 17:27:26 2024 00:23:07.690 read: IOPS=59, BW=7646KiB/s (7829kB/s)(41.5MiB/5558msec) 00:23:07.690 slat (usec): min=11, max=2154, avg=45.42, stdev=151.90 00:23:07.690 clat (msec): min=50, max=601, avg=88.55, stdev=56.87 00:23:07.690 lat (msec): min=50, max=601, avg=88.59, stdev=56.87 00:23:07.690 clat percentiles (msec): 00:23:07.690 | 1.00th=[ 52], 5.00th=[ 63], 10.00th=[ 63], 20.00th=[ 65], 00:23:07.690 | 30.00th=[ 67], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 70], 00:23:07.690 | 70.00th=[ 72], 80.00th=[ 96], 90.00th=[ 163], 95.00th=[ 188], 00:23:07.690 | 99.00th=[ 275], 99.50th=[ 575], 99.90th=[ 600], 99.95th=[ 600], 00:23:07.690 | 99.99th=[ 600] 00:23:07.690 bw ( KiB/s): min= 5888, max=17408, per=3.63%, avg=8448.00, stdev=3472.55, samples=10 00:23:07.690 iops : min= 46, max= 136, avg=66.00, stdev=27.13, samples=10 00:23:07.690 write: IOPS=60, BW=7738KiB/s (7924kB/s)(42.0MiB/5558msec); 0 zone resets 00:23:07.690 slat (usec): min=12, max=3343, avg=58.43, stdev=207.53 00:23:07.690 clat (msec): min=270, max=1548, avg=968.62, stdev=186.17 00:23:07.690 lat (msec): min=273, max=1548, avg=968.68, stdev=186.13 00:23:07.690 clat percentiles (msec): 00:23:07.690 | 1.00th=[ 338], 5.00th=[ 609], 10.00th=[ 735], 20.00th=[ 877], 00:23:07.690 | 30.00th=[ 961], 40.00th=[ 986], 50.00th=[ 1003], 60.00th=[ 1011], 00:23:07.690 | 70.00th=[ 1028], 80.00th=[ 1045], 90.00th=[ 1062], 95.00th=[ 1284], 00:23:07.690 | 99.00th=[ 1519], 99.50th=[ 1552], 99.90th=[ 1552], 99.95th=[ 1552], 00:23:07.690 | 99.99th=[ 1552] 00:23:07.690 bw ( KiB/s): min= 2048, max= 7936, per=3.05%, avg=7040.00, stdev=1774.65, samples=10 00:23:07.690 iops : min= 16, max= 62, avg=55.00, stdev=13.86, samples=10 00:23:07.690 lat (msec) : 100=39.97%, 250=8.98%, 500=1.65%, 750=4.34%, 
1000=18.86% 00:23:07.690 lat (msec) : 2000=26.20% 00:23:07.690 cpu : usr=0.18%, sys=0.38%, ctx=432, majf=0, minf=1 00:23:07.690 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.6% 00:23:07.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.690 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.690 issued rwts: total=332,336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.690 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.690 job5: (groupid=0, jobs=1): err= 0: pid=81638: Mon Jul 22 17:27:26 2024 00:23:07.690 read: IOPS=63, BW=8151KiB/s (8347kB/s)(44.2MiB/5559msec) 00:23:07.690 slat (usec): min=8, max=627, avg=33.86, stdev=41.14 00:23:07.690 clat (msec): min=19, max=578, avg=82.86, stdev=60.78 00:23:07.690 lat (msec): min=19, max=578, avg=82.90, stdev=60.78 00:23:07.690 clat percentiles (msec): 00:23:07.690 | 1.00th=[ 29], 5.00th=[ 53], 10.00th=[ 62], 20.00th=[ 64], 00:23:07.690 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:23:07.690 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 125], 95.00th=[ 178], 00:23:07.690 | 99.00th=[ 321], 99.50th=[ 567], 99.90th=[ 575], 99.95th=[ 575], 00:23:07.690 | 99.99th=[ 575] 00:23:07.690 bw ( KiB/s): min= 256, max=12800, per=3.51%, avg=8170.91, stdev=3645.58, samples=11 00:23:07.690 iops : min= 2, max= 100, avg=63.82, stdev=28.46, samples=11 00:23:07.690 write: IOPS=60, BW=7691KiB/s (7875kB/s)(41.8MiB/5559msec); 0 zone resets 00:23:07.690 slat (usec): min=9, max=481, avg=39.80, stdev=39.74 00:23:07.690 clat (msec): min=281, max=1557, avg=975.46, stdev=180.74 00:23:07.690 lat (msec): min=281, max=1557, avg=975.50, stdev=180.74 00:23:07.690 clat percentiles (msec): 00:23:07.690 | 1.00th=[ 342], 5.00th=[ 609], 10.00th=[ 735], 20.00th=[ 953], 00:23:07.690 | 30.00th=[ 978], 40.00th=[ 986], 50.00th=[ 995], 60.00th=[ 1003], 00:23:07.690 | 70.00th=[ 1011], 80.00th=[ 1028], 90.00th=[ 1053], 95.00th=[ 1318], 00:23:07.690 | 
99.00th=[ 1502], 99.50th=[ 1536], 99.90th=[ 1552], 99.95th=[ 1552], 00:23:07.690 | 99.99th=[ 1552] 00:23:07.690 bw ( KiB/s): min= 2052, max= 7936, per=3.04%, avg=7014.80, stdev=1769.09, samples=10 00:23:07.690 iops : min= 16, max= 62, avg=54.80, stdev=13.83, samples=10 00:23:07.690 lat (msec) : 20=0.29%, 50=1.60%, 100=43.02%, 250=5.23%, 500=2.03% 00:23:07.690 lat (msec) : 750=4.22%, 1000=23.26%, 2000=20.35% 00:23:07.690 cpu : usr=0.20%, sys=0.41%, ctx=403, majf=0, minf=1 00:23:07.690 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8% 00:23:07.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.690 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.690 issued rwts: total=354,334,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.690 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.690 job6: (groupid=0, jobs=1): err= 0: pid=81665: Mon Jul 22 17:27:26 2024 00:23:07.690 read: IOPS=60, BW=7704KiB/s (7888kB/s)(41.6MiB/5533msec) 00:23:07.690 slat (usec): min=8, max=100, avg=33.73, stdev=17.85 00:23:07.690 clat (msec): min=44, max=249, avg=81.83, stdev=40.25 00:23:07.690 lat (msec): min=44, max=249, avg=81.86, stdev=40.24 00:23:07.690 clat percentiles (msec): 00:23:07.690 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 63], 20.00th=[ 65], 00:23:07.690 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:23:07.690 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 144], 95.00th=[ 184], 00:23:07.690 | 99.00th=[ 243], 99.50th=[ 249], 99.90th=[ 249], 99.95th=[ 249], 00:23:07.690 | 99.99th=[ 249] 00:23:07.690 bw ( KiB/s): min= 4608, max=12800, per=3.66%, avg=8522.70, stdev=2569.72, samples=10 00:23:07.690 iops : min= 36, max= 100, avg=66.40, stdev=19.96, samples=10 00:23:07.690 write: IOPS=61, BW=7866KiB/s (8054kB/s)(42.5MiB/5533msec); 0 zone resets 00:23:07.690 slat (usec): min=11, max=332, avg=39.25, stdev=30.84 00:23:07.690 clat (msec): min=249, max=1469, avg=959.55, stdev=179.42 
00:23:07.690 lat (msec): min=250, max=1469, avg=959.58, stdev=179.42 00:23:07.690 clat percentiles (msec): 00:23:07.690 | 1.00th=[ 334], 5.00th=[ 558], 10.00th=[ 684], 20.00th=[ 919], 00:23:07.690 | 30.00th=[ 969], 40.00th=[ 986], 50.00th=[ 1003], 60.00th=[ 1011], 00:23:07.690 | 70.00th=[ 1020], 80.00th=[ 1028], 90.00th=[ 1053], 95.00th=[ 1217], 00:23:07.690 | 99.00th=[ 1435], 99.50th=[ 1469], 99.90th=[ 1469], 99.95th=[ 1469], 00:23:07.690 | 99.99th=[ 1469] 00:23:07.690 bw ( KiB/s): min= 256, max= 7936, per=2.79%, avg=6445.09, stdev=2607.29, samples=11 00:23:07.690 iops : min= 2, max= 62, avg=50.18, stdev=20.31, samples=11 00:23:07.690 lat (msec) : 50=0.30%, 100=41.90%, 250=7.43%, 500=1.19%, 750=4.46% 00:23:07.690 lat (msec) : 1000=19.32%, 2000=25.41% 00:23:07.690 cpu : usr=0.14%, sys=0.42%, ctx=405, majf=0, minf=1 00:23:07.690 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.6% 00:23:07.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.690 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.690 issued rwts: total=333,340,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.690 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.690 job7: (groupid=0, jobs=1): err= 0: pid=81671: Mon Jul 22 17:27:26 2024 00:23:07.690 read: IOPS=56, BW=7237KiB/s (7410kB/s)(39.2MiB/5554msec) 00:23:07.690 slat (nsec): min=8041, max=89241, avg=27857.29, stdev=14650.99 00:23:07.690 clat (msec): min=11, max=605, avg=86.20, stdev=70.66 00:23:07.690 lat (msec): min=11, max=605, avg=86.23, stdev=70.66 00:23:07.690 clat percentiles (msec): 00:23:07.690 | 1.00th=[ 21], 5.00th=[ 52], 10.00th=[ 63], 20.00th=[ 65], 00:23:07.690 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:23:07.690 | 70.00th=[ 70], 80.00th=[ 75], 90.00th=[ 133], 95.00th=[ 215], 00:23:07.690 | 99.00th=[ 575], 99.50th=[ 592], 99.90th=[ 609], 99.95th=[ 609], 00:23:07.690 | 99.99th=[ 609] 00:23:07.690 bw ( KiB/s): min= 256, 
max=14848, per=3.10%, avg=7214.55, stdev=3546.90, samples=11 00:23:07.690 iops : min= 2, max= 116, avg=56.36, stdev=27.71, samples=11 00:23:07.690 write: IOPS=59, BW=7674KiB/s (7859kB/s)(41.6MiB/5554msec); 0 zone resets 00:23:07.690 slat (usec): min=8, max=245, avg=35.10, stdev=18.92 00:23:07.690 clat (msec): min=246, max=1617, avg=984.29, stdev=183.19 00:23:07.690 lat (msec): min=246, max=1617, avg=984.33, stdev=183.19 00:23:07.690 clat percentiles (msec): 00:23:07.690 | 1.00th=[ 300], 5.00th=[ 609], 10.00th=[ 751], 20.00th=[ 961], 00:23:07.690 | 30.00th=[ 986], 40.00th=[ 995], 50.00th=[ 1003], 60.00th=[ 1011], 00:23:07.690 | 70.00th=[ 1028], 80.00th=[ 1045], 90.00th=[ 1070], 95.00th=[ 1301], 00:23:07.690 | 99.00th=[ 1502], 99.50th=[ 1586], 99.90th=[ 1620], 99.95th=[ 1620], 00:23:07.690 | 99.99th=[ 1620] 00:23:07.690 bw ( KiB/s): min= 2048, max= 7936, per=3.04%, avg=7014.40, stdev=1774.44, samples=10 00:23:07.690 iops : min= 16, max= 62, avg=54.80, stdev=13.86, samples=10 00:23:07.690 lat (msec) : 20=0.31%, 50=1.70%, 100=40.19%, 250=5.10%, 500=1.85% 00:23:07.690 lat (msec) : 750=4.48%, 1000=18.24%, 2000=28.13% 00:23:07.690 cpu : usr=0.16%, sys=0.36%, ctx=399, majf=0, minf=1 00:23:07.690 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3% 00:23:07.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.691 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.691 issued rwts: total=314,333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.691 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.691 job8: (groupid=0, jobs=1): err= 0: pid=81672: Mon Jul 22 17:27:26 2024 00:23:07.691 read: IOPS=53, BW=6911KiB/s (7077kB/s)(37.4MiB/5538msec) 00:23:07.691 slat (usec): min=7, max=949, avg=41.46, stdev=82.84 00:23:07.691 clat (msec): min=49, max=598, avg=92.44, stdev=66.39 00:23:07.691 lat (msec): min=49, max=598, avg=92.48, stdev=66.38 00:23:07.691 clat percentiles (msec): 00:23:07.691 | 
1.00th=[ 52], 5.00th=[ 63], 10.00th=[ 64], 20.00th=[ 66], 00:23:07.691 | 30.00th=[ 67], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:23:07.691 | 70.00th=[ 71], 80.00th=[ 97], 90.00th=[ 169], 95.00th=[ 222], 00:23:07.691 | 99.00th=[ 550], 99.50th=[ 567], 99.90th=[ 600], 99.95th=[ 600], 00:23:07.691 | 99.99th=[ 600] 00:23:07.691 bw ( KiB/s): min= 4608, max=15134, per=3.25%, avg=7576.30, stdev=3028.72, samples=10 00:23:07.691 iops : min= 36, max= 118, avg=58.90, stdev=23.66, samples=10 00:23:07.691 write: IOPS=60, BW=7743KiB/s (7929kB/s)(41.9MiB/5538msec); 0 zone resets 00:23:07.691 slat (usec): min=7, max=1207, avg=54.92, stdev=114.42 00:23:07.691 clat (msec): min=256, max=1513, avg=973.63, stdev=185.91 00:23:07.691 lat (msec): min=256, max=1513, avg=973.69, stdev=185.92 00:23:07.691 clat percentiles (msec): 00:23:07.691 | 1.00th=[ 326], 5.00th=[ 584], 10.00th=[ 743], 20.00th=[ 902], 00:23:07.691 | 30.00th=[ 969], 40.00th=[ 986], 50.00th=[ 1003], 60.00th=[ 1020], 00:23:07.691 | 70.00th=[ 1045], 80.00th=[ 1062], 90.00th=[ 1099], 95.00th=[ 1250], 00:23:07.691 | 99.00th=[ 1485], 99.50th=[ 1519], 99.90th=[ 1519], 99.95th=[ 1519], 00:23:07.691 | 99.99th=[ 1519] 00:23:07.691 bw ( KiB/s): min= 2052, max= 7936, per=3.04%, avg=7035.90, stdev=1768.46, samples=10 00:23:07.691 iops : min= 16, max= 62, avg=54.70, stdev=13.78, samples=10 00:23:07.691 lat (msec) : 50=0.16%, 100=37.85%, 250=8.68%, 500=1.26%, 750=4.73% 00:23:07.691 lat (msec) : 1000=21.29%, 2000=26.03% 00:23:07.691 cpu : usr=0.18%, sys=0.34%, ctx=434, majf=0, minf=1 00:23:07.691 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.0%, >=64=90.1% 00:23:07.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.691 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.691 issued rwts: total=299,335,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.691 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.691 job9: (groupid=0, jobs=1): err= 0: 
pid=81685: Mon Jul 22 17:27:26 2024 00:23:07.691 read: IOPS=66, BW=8477KiB/s (8680kB/s)(46.0MiB/5557msec) 00:23:07.691 slat (usec): min=7, max=737, avg=30.28, stdev=39.70 00:23:07.691 clat (msec): min=18, max=593, avg=89.01, stdev=63.27 00:23:07.691 lat (msec): min=18, max=593, avg=89.04, stdev=63.27 00:23:07.691 clat percentiles (msec): 00:23:07.691 | 1.00th=[ 28], 5.00th=[ 52], 10.00th=[ 63], 20.00th=[ 65], 00:23:07.691 | 30.00th=[ 66], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 70], 00:23:07.691 | 70.00th=[ 72], 80.00th=[ 92], 90.00th=[ 150], 95.00th=[ 211], 00:23:07.691 | 99.00th=[ 292], 99.50th=[ 567], 99.90th=[ 592], 99.95th=[ 592], 00:23:07.691 | 99.99th=[ 592] 00:23:07.691 bw ( KiB/s): min= 256, max=20224, per=3.64%, avg=8494.55, stdev=5024.15, samples=11 00:23:07.691 iops : min= 2, max= 158, avg=66.36, stdev=39.25, samples=11 00:23:07.691 write: IOPS=60, BW=7693KiB/s (7878kB/s)(41.8MiB/5557msec); 0 zone resets 00:23:07.691 slat (usec): min=13, max=819, avg=38.22, stdev=45.63 00:23:07.691 clat (msec): min=243, max=1557, avg=964.65, stdev=183.68 00:23:07.691 lat (msec): min=244, max=1557, avg=964.69, stdev=183.67 00:23:07.691 clat percentiles (msec): 00:23:07.691 | 1.00th=[ 305], 5.00th=[ 600], 10.00th=[ 751], 20.00th=[ 869], 00:23:07.691 | 30.00th=[ 961], 40.00th=[ 978], 50.00th=[ 995], 60.00th=[ 1011], 00:23:07.691 | 70.00th=[ 1028], 80.00th=[ 1045], 90.00th=[ 1070], 95.00th=[ 1217], 00:23:07.691 | 99.00th=[ 1536], 99.50th=[ 1552], 99.90th=[ 1552], 99.95th=[ 1552], 00:23:07.691 | 99.99th=[ 1552] 00:23:07.691 bw ( KiB/s): min= 2048, max= 7936, per=3.04%, avg=7014.40, stdev=1770.33, samples=10 00:23:07.691 iops : min= 16, max= 62, avg=54.80, stdev=13.83, samples=10 00:23:07.691 lat (msec) : 20=0.28%, 50=1.28%, 100=41.60%, 250=7.98%, 500=1.99% 00:23:07.691 lat (msec) : 750=4.13%, 1000=20.09%, 2000=22.65% 00:23:07.691 cpu : usr=0.22%, sys=0.34%, ctx=453, majf=0, minf=1 00:23:07.691 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.6%, >=64=91.0% 
00:23:07.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.691 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.691 issued rwts: total=368,334,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.691 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.691 job10: (groupid=0, jobs=1): err= 0: pid=81714: Mon Jul 22 17:27:26 2024 00:23:07.691 read: IOPS=66, BW=8484KiB/s (8688kB/s)(45.9MiB/5537msec) 00:23:07.691 slat (usec): min=9, max=370, avg=29.57, stdev=29.52 00:23:07.691 clat (msec): min=47, max=587, avg=90.59, stdev=72.90 00:23:07.691 lat (msec): min=47, max=587, avg=90.62, stdev=72.90 00:23:07.691 clat percentiles (msec): 00:23:07.691 | 1.00th=[ 50], 5.00th=[ 53], 10.00th=[ 62], 20.00th=[ 65], 00:23:07.691 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:23:07.691 | 70.00th=[ 70], 80.00th=[ 77], 90.00th=[ 159], 95.00th=[ 232], 00:23:07.691 | 99.00th=[ 542], 99.50th=[ 592], 99.90th=[ 592], 99.95th=[ 592], 00:23:07.691 | 99.99th=[ 592] 00:23:07.691 bw ( KiB/s): min= 256, max=13595, per=3.61%, avg=8423.00, stdev=3858.87, samples=11 00:23:07.691 iops : min= 2, max= 106, avg=65.64, stdev=30.14, samples=11 00:23:07.691 write: IOPS=60, BW=7698KiB/s (7883kB/s)(41.6MiB/5537msec); 0 zone resets 00:23:07.691 slat (usec): min=13, max=602, avg=39.25, stdev=49.30 00:23:07.691 clat (msec): min=268, max=1566, avg=961.72, stdev=188.82 00:23:07.691 lat (msec): min=268, max=1566, avg=961.76, stdev=188.83 00:23:07.691 clat percentiles (msec): 00:23:07.691 | 1.00th=[ 347], 5.00th=[ 558], 10.00th=[ 735], 20.00th=[ 885], 00:23:07.691 | 30.00th=[ 961], 40.00th=[ 978], 50.00th=[ 995], 60.00th=[ 1003], 00:23:07.691 | 70.00th=[ 1011], 80.00th=[ 1028], 90.00th=[ 1062], 95.00th=[ 1284], 00:23:07.691 | 99.00th=[ 1519], 99.50th=[ 1552], 99.90th=[ 1569], 99.95th=[ 1569], 00:23:07.691 | 99.99th=[ 1569] 00:23:07.691 bw ( KiB/s): min= 2052, max= 7936, per=3.04%, avg=7035.70, stdev=1775.62, samples=10 
00:23:07.691 iops : min= 16, max= 62, avg=54.80, stdev=13.84, samples=10 00:23:07.691 lat (msec) : 50=0.86%, 100=43.43%, 250=6.43%, 500=2.14%, 750=4.71% 00:23:07.691 lat (msec) : 1000=23.43%, 2000=19.00% 00:23:07.691 cpu : usr=0.31%, sys=0.20%, ctx=422, majf=0, minf=1 00:23:07.691 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.6%, >=64=91.0% 00:23:07.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.691 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.691 issued rwts: total=367,333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.691 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.691 job11: (groupid=0, jobs=1): err= 0: pid=81768: Mon Jul 22 17:27:26 2024 00:23:07.691 read: IOPS=58, BW=7508KiB/s (7688kB/s)(40.6MiB/5541msec) 00:23:07.691 slat (usec): min=7, max=1174, avg=33.27, stdev=70.42 00:23:07.691 clat (msec): min=47, max=593, avg=93.63, stdev=67.28 00:23:07.691 lat (msec): min=47, max=593, avg=93.66, stdev=67.28 00:23:07.691 clat percentiles (msec): 00:23:07.691 | 1.00th=[ 50], 5.00th=[ 62], 10.00th=[ 64], 20.00th=[ 65], 00:23:07.691 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:23:07.691 | 70.00th=[ 71], 80.00th=[ 95], 90.00th=[ 188], 95.00th=[ 222], 00:23:07.691 | 99.00th=[ 271], 99.50th=[ 550], 99.90th=[ 592], 99.95th=[ 592], 00:23:07.691 | 99.99th=[ 592] 00:23:07.691 bw ( KiB/s): min= 256, max=16896, per=3.21%, avg=7490.64, stdev=4324.63, samples=11 00:23:07.691 iops : min= 2, max= 132, avg=58.27, stdev=33.91, samples=11 00:23:07.691 write: IOPS=60, BW=7739KiB/s (7924kB/s)(41.9MiB/5541msec); 0 zone resets 00:23:07.691 slat (usec): min=12, max=676, avg=38.78, stdev=47.96 00:23:07.691 clat (msec): min=271, max=1588, avg=965.86, stdev=193.84 00:23:07.691 lat (msec): min=271, max=1588, avg=965.90, stdev=193.84 00:23:07.691 clat percentiles (msec): 00:23:07.691 | 1.00th=[ 334], 5.00th=[ 584], 10.00th=[ 718], 20.00th=[ 877], 00:23:07.691 | 30.00th=[ 
944], 40.00th=[ 969], 50.00th=[ 995], 60.00th=[ 1011], 00:23:07.691 | 70.00th=[ 1028], 80.00th=[ 1045], 90.00th=[ 1099], 95.00th=[ 1318], 00:23:07.691 | 99.00th=[ 1502], 99.50th=[ 1552], 99.90th=[ 1586], 99.95th=[ 1586], 00:23:07.691 | 99.99th=[ 1586] 00:23:07.691 bw ( KiB/s): min= 2048, max= 7936, per=3.04%, avg=7035.50, stdev=1765.35, samples=10 00:23:07.691 iops : min= 16, max= 62, avg=54.70, stdev=13.74, samples=10 00:23:07.691 lat (msec) : 50=1.21%, 100=38.18%, 250=8.79%, 500=1.82%, 750=5.61% 00:23:07.691 lat (msec) : 1000=20.91%, 2000=23.48% 00:23:07.691 cpu : usr=0.16%, sys=0.34%, ctx=423, majf=0, minf=1 00:23:07.691 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:23:07.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.691 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.691 issued rwts: total=325,335,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.691 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.691 job12: (groupid=0, jobs=1): err= 0: pid=81793: Mon Jul 22 17:27:26 2024 00:23:07.691 read: IOPS=69, BW=8917KiB/s (9131kB/s)(48.2MiB/5541msec) 00:23:07.691 slat (usec): min=8, max=231, avg=28.98, stdev=18.23 00:23:07.691 clat (msec): min=45, max=580, avg=91.03, stdev=69.42 00:23:07.691 lat (msec): min=45, max=581, avg=91.06, stdev=69.42 00:23:07.691 clat percentiles (msec): 00:23:07.691 | 1.00th=[ 47], 5.00th=[ 53], 10.00th=[ 63], 20.00th=[ 65], 00:23:07.691 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:23:07.691 | 70.00th=[ 71], 80.00th=[ 83], 90.00th=[ 165], 95.00th=[ 243], 00:23:07.691 | 99.00th=[ 550], 99.50th=[ 567], 99.90th=[ 584], 99.95th=[ 584], 00:23:07.691 | 99.99th=[ 584] 00:23:07.691 bw ( KiB/s): min= 5609, max=16416, per=4.20%, avg=9778.30, stdev=3049.92, samples=10 00:23:07.691 iops : min= 43, max= 128, avg=76.20, stdev=23.92, samples=10 00:23:07.691 write: IOPS=60, BW=7692KiB/s (7877kB/s)(41.6MiB/5541msec); 0 zone 
resets 00:23:07.691 slat (usec): min=12, max=306, avg=37.10, stdev=25.14 00:23:07.691 clat (msec): min=271, max=1512, avg=957.44, stdev=177.71 00:23:07.691 lat (msec): min=272, max=1512, avg=957.48, stdev=177.71 00:23:07.692 clat percentiles (msec): 00:23:07.692 | 1.00th=[ 372], 5.00th=[ 617], 10.00th=[ 751], 20.00th=[ 877], 00:23:07.692 | 30.00th=[ 961], 40.00th=[ 978], 50.00th=[ 986], 60.00th=[ 1003], 00:23:07.692 | 70.00th=[ 1011], 80.00th=[ 1028], 90.00th=[ 1045], 95.00th=[ 1250], 00:23:07.692 | 99.00th=[ 1469], 99.50th=[ 1502], 99.90th=[ 1519], 99.95th=[ 1519], 00:23:07.692 | 99.99th=[ 1519] 00:23:07.692 bw ( KiB/s): min= 2052, max= 7936, per=3.03%, avg=7010.10, stdev=1758.62, samples=10 00:23:07.692 iops : min= 16, max= 62, avg=54.60, stdev=13.69, samples=10 00:23:07.692 lat (msec) : 50=1.53%, 100=43.39%, 250=6.68%, 500=2.64%, 750=4.17% 00:23:07.692 lat (msec) : 1000=23.23%, 2000=18.36% 00:23:07.692 cpu : usr=0.16%, sys=0.40%, ctx=417, majf=0, minf=1 00:23:07.692 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:23:07.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.692 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.692 issued rwts: total=386,333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.692 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.692 job13: (groupid=0, jobs=1): err= 0: pid=81828: Mon Jul 22 17:27:26 2024 00:23:07.692 read: IOPS=64, BW=8239KiB/s (8437kB/s)(44.6MiB/5546msec) 00:23:07.692 slat (usec): min=9, max=127, avg=28.11, stdev=15.70 00:23:07.692 clat (msec): min=46, max=589, avg=87.19, stdev=60.06 00:23:07.692 lat (msec): min=46, max=589, avg=87.22, stdev=60.06 00:23:07.692 clat percentiles (msec): 00:23:07.692 | 1.00th=[ 51], 5.00th=[ 59], 10.00th=[ 63], 20.00th=[ 65], 00:23:07.692 | 30.00th=[ 67], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 70], 00:23:07.692 | 70.00th=[ 71], 80.00th=[ 84], 90.00th=[ 150], 95.00th=[ 197], 
00:23:07.692 | 99.00th=[ 268], 99.50th=[ 575], 99.90th=[ 592], 99.95th=[ 592], 00:23:07.692 | 99.99th=[ 592] 00:23:07.692 bw ( KiB/s): min= 4352, max=15840, per=3.89%, avg=9054.00, stdev=2833.36, samples=10 00:23:07.692 iops : min= 34, max= 123, avg=70.50, stdev=21.96, samples=10 00:23:07.692 write: IOPS=60, BW=7709KiB/s (7894kB/s)(41.8MiB/5546msec); 0 zone resets 00:23:07.692 slat (nsec): min=12172, max=94482, avg=33181.62, stdev=14247.02 00:23:07.692 clat (msec): min=267, max=1553, avg=967.54, stdev=180.13 00:23:07.692 lat (msec): min=267, max=1553, avg=967.58, stdev=180.13 00:23:07.692 clat percentiles (msec): 00:23:07.692 | 1.00th=[ 326], 5.00th=[ 592], 10.00th=[ 760], 20.00th=[ 919], 00:23:07.692 | 30.00th=[ 961], 40.00th=[ 978], 50.00th=[ 995], 60.00th=[ 1011], 00:23:07.692 | 70.00th=[ 1020], 80.00th=[ 1036], 90.00th=[ 1053], 95.00th=[ 1250], 00:23:07.692 | 99.00th=[ 1519], 99.50th=[ 1552], 99.90th=[ 1552], 99.95th=[ 1552], 00:23:07.692 | 99.99th=[ 1552] 00:23:07.692 bw ( KiB/s): min= 2043, max= 7936, per=3.03%, avg=7009.30, stdev=1758.10, samples=10 00:23:07.692 iops : min= 15, max= 62, avg=54.50, stdev=14.00, samples=10 00:23:07.692 lat (msec) : 50=0.43%, 100=42.26%, 250=8.39%, 500=1.30%, 750=3.91% 00:23:07.692 lat (msec) : 1000=22.29%, 2000=21.42% 00:23:07.692 cpu : usr=0.13%, sys=0.38%, ctx=404, majf=0, minf=1 00:23:07.692 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.6%, >=64=90.9% 00:23:07.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.692 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.692 issued rwts: total=357,334,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.692 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.692 job14: (groupid=0, jobs=1): err= 0: pid=81829: Mon Jul 22 17:27:26 2024 00:23:07.692 read: IOPS=57, BW=7401KiB/s (7579kB/s)(40.2MiB/5569msec) 00:23:07.692 slat (usec): min=10, max=1724, avg=51.14, stdev=151.36 00:23:07.692 clat (msec): 
min=7, max=614, avg=88.60, stdev=75.50 00:23:07.692 lat (msec): min=7, max=614, avg=88.65, stdev=75.51 00:23:07.692 clat percentiles (msec): 00:23:07.692 | 1.00th=[ 9], 5.00th=[ 51], 10.00th=[ 63], 20.00th=[ 65], 00:23:07.692 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:23:07.692 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 134], 95.00th=[ 305], 00:23:07.692 | 99.00th=[ 326], 99.50th=[ 617], 99.90th=[ 617], 99.95th=[ 617], 00:23:07.692 | 99.99th=[ 617] 00:23:07.692 bw ( KiB/s): min= 6144, max=11543, per=3.52%, avg=8192.80, stdev=1660.86, samples=10 00:23:07.692 iops : min= 48, max= 90, avg=63.90, stdev=13.00, samples=10 00:23:07.692 write: IOPS=60, BW=7746KiB/s (7932kB/s)(42.1MiB/5569msec); 0 zone resets 00:23:07.692 slat (usec): min=11, max=825, avg=44.69, stdev=63.65 00:23:07.692 clat (msec): min=67, max=1592, avg=971.01, stdev=191.71 00:23:07.692 lat (msec): min=67, max=1592, avg=971.06, stdev=191.71 00:23:07.692 clat percentiles (msec): 00:23:07.692 | 1.00th=[ 288], 5.00th=[ 609], 10.00th=[ 743], 20.00th=[ 927], 00:23:07.692 | 30.00th=[ 961], 40.00th=[ 978], 50.00th=[ 995], 60.00th=[ 1003], 00:23:07.692 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1083], 95.00th=[ 1284], 00:23:07.692 | 99.00th=[ 1552], 99.50th=[ 1569], 99.90th=[ 1586], 99.95th=[ 1586], 00:23:07.692 | 99.99th=[ 1586] 00:23:07.692 bw ( KiB/s): min= 2308, max= 7936, per=3.06%, avg=7064.50, stdev=1693.08, samples=10 00:23:07.692 iops : min= 18, max= 62, avg=55.10, stdev=13.22, samples=10 00:23:07.692 lat (msec) : 10=0.91%, 20=0.46%, 50=0.91%, 100=40.82%, 250=2.58% 00:23:07.692 lat (msec) : 500=4.25%, 750=4.40%, 1000=23.98%, 2000=21.70% 00:23:07.692 cpu : usr=0.22%, sys=0.29%, ctx=431, majf=0, minf=1 00:23:07.692 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.9%, >=64=90.4% 00:23:07.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.692 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.692 issued rwts: 
total=322,337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.692 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.692 job15: (groupid=0, jobs=1): err= 0: pid=81830: Mon Jul 22 17:27:26 2024 00:23:07.692 read: IOPS=58, BW=7474KiB/s (7653kB/s)(40.5MiB/5549msec) 00:23:07.692 slat (usec): min=9, max=458, avg=30.34, stdev=32.85 00:23:07.692 clat (msec): min=46, max=561, avg=88.07, stdev=53.59 00:23:07.692 lat (msec): min=46, max=561, avg=88.10, stdev=53.59 00:23:07.692 clat percentiles (msec): 00:23:07.692 | 1.00th=[ 50], 5.00th=[ 62], 10.00th=[ 64], 20.00th=[ 66], 00:23:07.692 | 30.00th=[ 67], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:23:07.692 | 70.00th=[ 71], 80.00th=[ 87], 90.00th=[ 159], 95.00th=[ 215], 00:23:07.692 | 99.00th=[ 249], 99.50th=[ 266], 99.90th=[ 558], 99.95th=[ 558], 00:23:07.692 | 99.99th=[ 558] 00:23:07.692 bw ( KiB/s): min= 5365, max=14592, per=3.55%, avg=8266.20, stdev=2640.62, samples=10 00:23:07.692 iops : min= 41, max= 114, avg=64.40, stdev=20.79, samples=10 00:23:07.692 write: IOPS=60, BW=7774KiB/s (7960kB/s)(42.1MiB/5549msec); 0 zone resets 00:23:07.692 slat (usec): min=14, max=3361, avg=49.22, stdev=198.97 00:23:07.692 clat (msec): min=267, max=1555, avg=966.30, stdev=185.16 00:23:07.692 lat (msec): min=270, max=1555, avg=966.35, stdev=185.11 00:23:07.692 clat percentiles (msec): 00:23:07.692 | 1.00th=[ 351], 5.00th=[ 592], 10.00th=[ 709], 20.00th=[ 894], 00:23:07.692 | 30.00th=[ 953], 40.00th=[ 978], 50.00th=[ 995], 60.00th=[ 1011], 00:23:07.692 | 70.00th=[ 1028], 80.00th=[ 1053], 90.00th=[ 1083], 95.00th=[ 1301], 00:23:07.692 | 99.00th=[ 1469], 99.50th=[ 1536], 99.90th=[ 1552], 99.95th=[ 1552], 00:23:07.692 | 99.99th=[ 1552] 00:23:07.692 bw ( KiB/s): min= 2048, max= 7936, per=3.05%, avg=7037.00, stdev=1770.07, samples=10 00:23:07.692 iops : min= 16, max= 62, avg=54.80, stdev=13.81, samples=10 00:23:07.692 lat (msec) : 50=0.61%, 100=39.33%, 250=8.62%, 500=1.51%, 750=4.69% 00:23:07.692 lat (msec) : 1000=21.33%, 
2000=23.90% 00:23:07.692 cpu : usr=0.11%, sys=0.38%, ctx=426, majf=0, minf=1 00:23:07.692 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:23:07.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.692 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.692 issued rwts: total=324,337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.692 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.692 job16: (groupid=0, jobs=1): err= 0: pid=81832: Mon Jul 22 17:27:26 2024 00:23:07.692 read: IOPS=67, BW=8598KiB/s (8804kB/s)(46.8MiB/5568msec) 00:23:07.692 slat (usec): min=7, max=2524, avg=46.55, stdev=155.35 00:23:07.692 clat (msec): min=15, max=605, avg=89.05, stdev=62.57 00:23:07.692 lat (msec): min=15, max=605, avg=89.09, stdev=62.55 00:23:07.692 clat percentiles (msec): 00:23:07.692 | 1.00th=[ 29], 5.00th=[ 52], 10.00th=[ 63], 20.00th=[ 65], 00:23:07.692 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:23:07.692 | 70.00th=[ 71], 80.00th=[ 87], 90.00th=[ 159], 95.00th=[ 228], 00:23:07.692 | 99.00th=[ 305], 99.50th=[ 592], 99.90th=[ 609], 99.95th=[ 609], 00:23:07.692 | 99.99th=[ 609] 00:23:07.692 bw ( KiB/s): min= 6400, max=17699, per=4.09%, avg=9526.70, stdev=3353.41, samples=10 00:23:07.692 iops : min= 50, max= 138, avg=74.40, stdev=26.12, samples=10 00:23:07.692 write: IOPS=60, BW=7724KiB/s (7910kB/s)(42.0MiB/5568msec); 0 zone resets 00:23:07.692 slat (usec): min=11, max=1267, avg=53.10, stdev=111.47 00:23:07.692 clat (msec): min=123, max=1536, avg=958.44, stdev=184.97 00:23:07.692 lat (msec): min=124, max=1536, avg=958.49, stdev=184.94 00:23:07.692 clat percentiles (msec): 00:23:07.692 | 1.00th=[ 326], 5.00th=[ 609], 10.00th=[ 751], 20.00th=[ 852], 00:23:07.692 | 30.00th=[ 961], 40.00th=[ 978], 50.00th=[ 986], 60.00th=[ 995], 00:23:07.692 | 70.00th=[ 1011], 80.00th=[ 1028], 90.00th=[ 1070], 95.00th=[ 1250], 00:23:07.692 | 99.00th=[ 1519], 99.50th=[ 1536], 
99.90th=[ 1536], 99.95th=[ 1536], 00:23:07.692 | 99.99th=[ 1536] 00:23:07.692 bw ( KiB/s): min= 256, max= 7936, per=2.77%, avg=6400.36, stdev=2639.99, samples=11 00:23:07.692 iops : min= 2, max= 62, avg=50.00, stdev=20.63, samples=11 00:23:07.692 lat (msec) : 20=0.28%, 50=1.69%, 100=40.99%, 250=8.45%, 500=2.11% 00:23:07.692 lat (msec) : 750=3.94%, 1000=24.37%, 2000=18.17% 00:23:07.692 cpu : usr=0.14%, sys=0.40%, ctx=500, majf=0, minf=1 00:23:07.692 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1% 00:23:07.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.692 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.692 issued rwts: total=374,336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.692 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.692 job17: (groupid=0, jobs=1): err= 0: pid=81836: Mon Jul 22 17:27:26 2024 00:23:07.692 read: IOPS=54, BW=6915KiB/s (7081kB/s)(37.5MiB/5553msec) 00:23:07.692 slat (usec): min=7, max=1494, avg=35.21, stdev=92.29 00:23:07.692 clat (msec): min=17, max=587, avg=92.26, stdev=77.42 00:23:07.692 lat (msec): min=17, max=587, avg=92.30, stdev=77.43 00:23:07.693 clat percentiles (msec): 00:23:07.693 | 1.00th=[ 31], 5.00th=[ 53], 10.00th=[ 64], 20.00th=[ 65], 00:23:07.693 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 70], 00:23:07.693 | 70.00th=[ 71], 80.00th=[ 88], 90.00th=[ 155], 95.00th=[ 215], 00:23:07.693 | 99.00th=[ 575], 99.50th=[ 592], 99.90th=[ 592], 99.95th=[ 592], 00:23:07.693 | 99.99th=[ 592] 00:23:07.693 bw ( KiB/s): min= 2560, max=16160, per=3.24%, avg=7555.20, stdev=3570.66, samples=10 00:23:07.693 iops : min= 20, max= 126, avg=59.00, stdev=27.83, samples=10 00:23:07.693 write: IOPS=59, BW=7676KiB/s (7860kB/s)(41.6MiB/5553msec); 0 zone resets 00:23:07.693 slat (usec): min=8, max=407, avg=33.84, stdev=24.84 00:23:07.693 clat (msec): min=246, max=1487, avg=982.20, stdev=174.77 00:23:07.693 lat (msec): min=246, 
max=1487, avg=982.23, stdev=174.78 00:23:07.693 clat percentiles (msec): 00:23:07.693 | 1.00th=[ 321], 5.00th=[ 617], 10.00th=[ 785], 20.00th=[ 944], 00:23:07.693 | 30.00th=[ 969], 40.00th=[ 986], 50.00th=[ 1003], 60.00th=[ 1020], 00:23:07.693 | 70.00th=[ 1028], 80.00th=[ 1053], 90.00th=[ 1083], 95.00th=[ 1250], 00:23:07.693 | 99.00th=[ 1435], 99.50th=[ 1469], 99.90th=[ 1485], 99.95th=[ 1485], 00:23:07.693 | 99.99th=[ 1485] 00:23:07.693 bw ( KiB/s): min= 256, max= 7936, per=2.77%, avg=6400.36, stdev=2639.99, samples=11 00:23:07.693 iops : min= 2, max= 62, avg=50.00, stdev=20.63, samples=11 00:23:07.693 lat (msec) : 20=0.32%, 50=1.74%, 100=37.12%, 250=6.79%, 500=1.90% 00:23:07.693 lat (msec) : 750=4.11%, 1000=20.54%, 2000=27.49% 00:23:07.693 cpu : usr=0.09%, sys=0.40%, ctx=409, majf=0, minf=1 00:23:07.693 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.1%, >=64=90.0% 00:23:07.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.693 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.693 issued rwts: total=300,333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.693 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.693 job18: (groupid=0, jobs=1): err= 0: pid=81837: Mon Jul 22 17:27:26 2024 00:23:07.693 read: IOPS=59, BW=7637KiB/s (7820kB/s)(41.2MiB/5531msec) 00:23:07.693 slat (usec): min=10, max=108, avg=28.99, stdev=15.18 00:23:07.693 clat (msec): min=49, max=575, avg=89.02, stdev=61.66 00:23:07.693 lat (msec): min=49, max=576, avg=89.05, stdev=61.66 00:23:07.693 clat percentiles (msec): 00:23:07.693 | 1.00th=[ 51], 5.00th=[ 63], 10.00th=[ 64], 20.00th=[ 66], 00:23:07.693 | 30.00th=[ 67], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 70], 00:23:07.693 | 70.00th=[ 71], 80.00th=[ 82], 90.00th=[ 161], 95.00th=[ 197], 00:23:07.693 | 99.00th=[ 253], 99.50th=[ 558], 99.90th=[ 575], 99.95th=[ 575], 00:23:07.693 | 99.99th=[ 575] 00:23:07.693 bw ( KiB/s): min= 6656, max=14592, per=3.59%, 
avg=8366.40, stdev=2284.95, samples=10 00:23:07.693 iops : min= 52, max= 114, avg=65.10, stdev=17.93, samples=10 00:23:07.693 write: IOPS=60, BW=7776KiB/s (7962kB/s)(42.0MiB/5531msec); 0 zone resets 00:23:07.693 slat (nsec): min=11040, max=92981, avg=31239.43, stdev=13730.46 00:23:07.693 clat (msec): min=252, max=1522, avg=964.26, stdev=181.64 00:23:07.693 lat (msec): min=252, max=1522, avg=964.29, stdev=181.64 00:23:07.693 clat percentiles (msec): 00:23:07.693 | 1.00th=[ 347], 5.00th=[ 592], 10.00th=[ 726], 20.00th=[ 911], 00:23:07.693 | 30.00th=[ 961], 40.00th=[ 986], 50.00th=[ 995], 60.00th=[ 1003], 00:23:07.693 | 70.00th=[ 1011], 80.00th=[ 1045], 90.00th=[ 1083], 95.00th=[ 1250], 00:23:07.693 | 99.00th=[ 1435], 99.50th=[ 1502], 99.90th=[ 1519], 99.95th=[ 1519], 00:23:07.693 | 99.99th=[ 1519] 00:23:07.693 bw ( KiB/s): min= 2304, max= 7936, per=3.06%, avg=7061.10, stdev=1693.89, samples=10 00:23:07.693 iops : min= 18, max= 62, avg=54.90, stdev=13.19, samples=10 00:23:07.693 lat (msec) : 50=0.30%, 100=40.39%, 250=7.96%, 500=1.80%, 750=4.65% 00:23:07.693 lat (msec) : 1000=23.72%, 2000=21.17% 00:23:07.693 cpu : usr=0.16%, sys=0.36%, ctx=390, majf=0, minf=1 00:23:07.693 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:23:07.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.693 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.693 issued rwts: total=330,336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.693 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.693 job19: (groupid=0, jobs=1): err= 0: pid=81838: Mon Jul 22 17:27:26 2024 00:23:07.693 read: IOPS=66, BW=8529KiB/s (8733kB/s)(46.2MiB/5553msec) 00:23:07.693 slat (usec): min=10, max=366, avg=27.87, stdev=23.73 00:23:07.693 clat (msec): min=46, max=606, avg=89.11, stdev=63.18 00:23:07.693 lat (msec): min=46, max=606, avg=89.14, stdev=63.18 00:23:07.693 clat percentiles (msec): 00:23:07.693 | 1.00th=[ 51], 
5.00th=[ 57], 10.00th=[ 63], 20.00th=[ 65], 00:23:07.693 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:23:07.693 | 70.00th=[ 71], 80.00th=[ 86], 90.00th=[ 157], 95.00th=[ 211], 00:23:07.693 | 99.00th=[ 284], 99.50th=[ 592], 99.90th=[ 609], 99.95th=[ 609], 00:23:07.693 | 99.99th=[ 609] 00:23:07.693 bw ( KiB/s): min= 256, max=16896, per=3.67%, avg=8541.09, stdev=4033.56, samples=11 00:23:07.693 iops : min= 2, max= 132, avg=66.73, stdev=31.51, samples=11 00:23:07.693 write: IOPS=60, BW=7699KiB/s (7884kB/s)(41.8MiB/5553msec); 0 zone resets 00:23:07.693 slat (usec): min=14, max=7222, avg=61.07, stdev=398.63 00:23:07.693 clat (msec): min=275, max=1568, avg=962.03, stdev=186.35 00:23:07.693 lat (msec): min=282, max=1568, avg=962.09, stdev=186.27 00:23:07.693 clat percentiles (msec): 00:23:07.693 | 1.00th=[ 342], 5.00th=[ 625], 10.00th=[ 743], 20.00th=[ 877], 00:23:07.693 | 30.00th=[ 953], 40.00th=[ 978], 50.00th=[ 995], 60.00th=[ 1003], 00:23:07.693 | 70.00th=[ 1011], 80.00th=[ 1028], 90.00th=[ 1053], 95.00th=[ 1301], 00:23:07.693 | 99.00th=[ 1519], 99.50th=[ 1569], 99.90th=[ 1569], 99.95th=[ 1569], 00:23:07.693 | 99.99th=[ 1569] 00:23:07.693 bw ( KiB/s): min= 2048, max= 7936, per=3.04%, avg=7014.40, stdev=1770.33, samples=10 00:23:07.693 iops : min= 16, max= 62, avg=54.80, stdev=13.83, samples=10 00:23:07.693 lat (msec) : 50=0.57%, 100=43.04%, 250=7.95%, 500=1.70%, 750=4.26% 00:23:07.693 lat (msec) : 1000=23.58%, 2000=18.89% 00:23:07.693 cpu : usr=0.14%, sys=0.34%, ctx=439, majf=0, minf=1 00:23:07.693 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1% 00:23:07.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.693 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.693 issued rwts: total=370,334,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.693 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.693 job20: (groupid=0, jobs=1): err= 0: pid=81839: Mon Jul 
22 17:27:26 2024 00:23:07.693 read: IOPS=57, BW=7343KiB/s (7519kB/s)(39.9MiB/5561msec) 00:23:07.693 slat (nsec): min=7244, max=79910, avg=24986.94, stdev=12434.00 00:23:07.693 clat (usec): min=1602, max=604603, avg=87810.21, stdev=75289.08 00:23:07.693 lat (usec): min=1614, max=604639, avg=87835.19, stdev=75287.47 00:23:07.693 clat percentiles (msec): 00:23:07.693 | 1.00th=[ 4], 5.00th=[ 50], 10.00th=[ 62], 20.00th=[ 64], 00:23:07.693 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 67], 60.00th=[ 69], 00:23:07.693 | 70.00th=[ 70], 80.00th=[ 73], 90.00th=[ 144], 95.00th=[ 275], 00:23:07.693 | 99.00th=[ 300], 99.50th=[ 609], 99.90th=[ 609], 99.95th=[ 609], 00:23:07.693 | 99.99th=[ 609] 00:23:07.693 bw ( KiB/s): min= 4096, max=12032, per=3.47%, avg=8088.20, stdev=2160.31, samples=10 00:23:07.693 iops : min= 32, max= 94, avg=63.10, stdev=16.93, samples=10 00:23:07.693 write: IOPS=60, BW=7734KiB/s (7919kB/s)(42.0MiB/5561msec); 0 zone resets 00:23:07.693 slat (nsec): min=7850, max=78930, avg=30669.42, stdev=13330.14 00:23:07.693 clat (msec): min=9, max=1582, avg=973.89, stdev=205.85 00:23:07.693 lat (msec): min=9, max=1582, avg=973.92, stdev=205.86 00:23:07.693 clat percentiles (msec): 00:23:07.693 | 1.00th=[ 232], 5.00th=[ 592], 10.00th=[ 718], 20.00th=[ 944], 00:23:07.693 | 30.00th=[ 969], 40.00th=[ 978], 50.00th=[ 995], 60.00th=[ 1011], 00:23:07.693 | 70.00th=[ 1028], 80.00th=[ 1045], 90.00th=[ 1099], 95.00th=[ 1351], 00:23:07.693 | 99.00th=[ 1519], 99.50th=[ 1569], 99.90th=[ 1586], 99.95th=[ 1586], 00:23:07.693 | 99.99th=[ 1586] 00:23:07.693 bw ( KiB/s): min= 2816, max= 7936, per=3.06%, avg=7064.10, stdev=1517.77, samples=10 00:23:07.693 iops : min= 22, max= 62, avg=55.10, stdev=11.86, samples=10 00:23:07.693 lat (msec) : 2=0.15%, 4=0.76%, 10=1.07%, 50=0.61%, 100=40.00% 00:23:07.693 lat (msec) : 250=3.21%, 500=4.12%, 750=4.58%, 1000=21.53%, 2000=23.97% 00:23:07.693 cpu : usr=0.23%, sys=0.23%, ctx=383, majf=0, minf=1 00:23:07.694 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 
16=2.4%, 32=4.9%, >=64=90.4% 00:23:07.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.694 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.694 issued rwts: total=319,336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.694 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.694 job21: (groupid=0, jobs=1): err= 0: pid=81840: Mon Jul 22 17:27:26 2024 00:23:07.694 read: IOPS=56, BW=7233KiB/s (7407kB/s)(39.1MiB/5539msec) 00:23:07.694 slat (usec): min=10, max=898, avg=30.33, stdev=51.25 00:23:07.694 clat (msec): min=49, max=571, avg=84.37, stdev=49.84 00:23:07.694 lat (msec): min=49, max=571, avg=84.40, stdev=49.84 00:23:07.694 clat percentiles (msec): 00:23:07.694 | 1.00th=[ 51], 5.00th=[ 63], 10.00th=[ 64], 20.00th=[ 65], 00:23:07.694 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:23:07.694 | 70.00th=[ 70], 80.00th=[ 74], 90.00th=[ 138], 95.00th=[ 203], 00:23:07.694 | 99.00th=[ 253], 99.50th=[ 257], 99.90th=[ 575], 99.95th=[ 575], 00:23:07.694 | 99.99th=[ 575] 00:23:07.694 bw ( KiB/s): min= 5120, max=12288, per=3.42%, avg=7981.30, stdev=2435.43, samples=10 00:23:07.694 iops : min= 40, max= 96, avg=62.00, stdev=19.16, samples=10 00:23:07.694 write: IOPS=60, BW=7788KiB/s (7975kB/s)(42.1MiB/5539msec); 0 zone resets 00:23:07.694 slat (usec): min=9, max=1634, avg=38.19, stdev=88.82 00:23:07.694 clat (msec): min=260, max=1549, avg=971.33, stdev=190.63 00:23:07.694 lat (msec): min=260, max=1549, avg=971.37, stdev=190.61 00:23:07.694 clat percentiles (msec): 00:23:07.694 | 1.00th=[ 338], 5.00th=[ 584], 10.00th=[ 701], 20.00th=[ 919], 00:23:07.694 | 30.00th=[ 953], 40.00th=[ 969], 50.00th=[ 995], 60.00th=[ 1020], 00:23:07.694 | 70.00th=[ 1036], 80.00th=[ 1062], 90.00th=[ 1083], 95.00th=[ 1301], 00:23:07.694 | 99.00th=[ 1502], 99.50th=[ 1536], 99.90th=[ 1552], 99.95th=[ 1552], 00:23:07.694 | 99.99th=[ 1552] 00:23:07.694 bw ( KiB/s): min= 2304, max= 7936, per=3.04%, avg=7033.90, 
stdev=1684.99, samples=10 00:23:07.694 iops : min= 18, max= 62, avg=54.60, stdev=13.10, samples=10 00:23:07.694 lat (msec) : 50=0.31%, 100=40.46%, 250=6.62%, 500=2.00%, 750=4.62% 00:23:07.694 lat (msec) : 1000=21.38%, 2000=24.62% 00:23:07.694 cpu : usr=0.11%, sys=0.36%, ctx=394, majf=0, minf=1 00:23:07.694 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3% 00:23:07.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.694 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.694 issued rwts: total=313,337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.694 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.694 job22: (groupid=0, jobs=1): err= 0: pid=81841: Mon Jul 22 17:27:26 2024 00:23:07.694 read: IOPS=67, BW=8610KiB/s (8817kB/s)(46.5MiB/5530msec) 00:23:07.694 slat (usec): min=8, max=516, avg=28.79, stdev=28.90 00:23:07.694 clat (msec): min=48, max=584, avg=86.12, stdev=65.12 00:23:07.694 lat (msec): min=48, max=584, avg=86.15, stdev=65.12 00:23:07.694 clat percentiles (msec): 00:23:07.694 | 1.00th=[ 51], 5.00th=[ 54], 10.00th=[ 63], 20.00th=[ 65], 00:23:07.694 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:23:07.694 | 70.00th=[ 71], 80.00th=[ 75], 90.00th=[ 140], 95.00th=[ 194], 00:23:07.694 | 99.00th=[ 535], 99.50th=[ 558], 99.90th=[ 584], 99.95th=[ 584], 00:23:07.694 | 99.99th=[ 584] 00:23:07.694 bw ( KiB/s): min= 256, max=13312, per=3.66%, avg=8540.18, stdev=3261.26, samples=11 00:23:07.694 iops : min= 2, max= 104, avg=66.55, stdev=25.51, samples=11 00:23:07.694 write: IOPS=60, BW=7708KiB/s (7893kB/s)(41.6MiB/5530msec); 0 zone resets 00:23:07.694 slat (usec): min=9, max=281, avg=34.76, stdev=21.34 00:23:07.694 clat (msec): min=258, max=1531, avg=964.81, stdev=180.65 00:23:07.694 lat (msec): min=258, max=1531, avg=964.85, stdev=180.65 00:23:07.694 clat percentiles (msec): 00:23:07.694 | 1.00th=[ 326], 5.00th=[ 617], 10.00th=[ 735], 20.00th=[ 936], 
00:23:07.694 | 30.00th=[ 961], 40.00th=[ 978], 50.00th=[ 986], 60.00th=[ 995], 00:23:07.694 | 70.00th=[ 1011], 80.00th=[ 1028], 90.00th=[ 1083], 95.00th=[ 1267], 00:23:07.694 | 99.00th=[ 1502], 99.50th=[ 1519], 99.90th=[ 1536], 99.95th=[ 1536], 00:23:07.694 | 99.99th=[ 1536] 00:23:07.694 bw ( KiB/s): min= 2048, max= 7936, per=3.05%, avg=7038.40, stdev=1769.95, samples=10 00:23:07.694 iops : min= 16, max= 62, avg=54.80, stdev=13.79, samples=10 00:23:07.694 lat (msec) : 50=0.14%, 100=44.68%, 250=7.23%, 500=1.13%, 750=4.82% 00:23:07.694 lat (msec) : 1000=25.67%, 2000=16.31% 00:23:07.694 cpu : usr=0.16%, sys=0.40%, ctx=398, majf=0, minf=1 00:23:07.694 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1% 00:23:07.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.694 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.694 issued rwts: total=372,333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.694 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.694 job23: (groupid=0, jobs=1): err= 0: pid=81842: Mon Jul 22 17:27:26 2024 00:23:07.694 read: IOPS=54, BW=6969KiB/s (7137kB/s)(37.9MiB/5565msec) 00:23:07.694 slat (usec): min=7, max=398, avg=28.79, stdev=34.52 00:23:07.694 clat (msec): min=13, max=597, avg=82.63, stdev=59.22 00:23:07.694 lat (msec): min=13, max=597, avg=82.65, stdev=59.23 00:23:07.694 clat percentiles (msec): 00:23:07.694 | 1.00th=[ 21], 5.00th=[ 51], 10.00th=[ 61], 20.00th=[ 64], 00:23:07.694 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:23:07.694 | 70.00th=[ 71], 80.00th=[ 74], 90.00th=[ 125], 95.00th=[ 165], 00:23:07.694 | 99.00th=[ 288], 99.50th=[ 567], 99.90th=[ 600], 99.95th=[ 600], 00:23:07.694 | 99.99th=[ 600] 00:23:07.694 bw ( KiB/s): min= 256, max=15647, per=3.01%, avg=7007.91, stdev=4077.32, samples=11 00:23:07.694 iops : min= 2, max= 122, avg=54.73, stdev=31.80, samples=11 00:23:07.694 write: IOPS=60, BW=7705KiB/s 
(7890kB/s)(41.9MiB/5565msec); 0 zone resets 00:23:07.694 slat (usec): min=9, max=209, avg=29.16, stdev=15.37 00:23:07.694 clat (msec): min=250, max=1609, avg=986.63, stdev=189.10 00:23:07.694 lat (msec): min=250, max=1609, avg=986.66, stdev=189.11 00:23:07.694 clat percentiles (msec): 00:23:07.694 | 1.00th=[ 317], 5.00th=[ 617], 10.00th=[ 760], 20.00th=[ 944], 00:23:07.694 | 30.00th=[ 969], 40.00th=[ 995], 50.00th=[ 1003], 60.00th=[ 1020], 00:23:07.694 | 70.00th=[ 1028], 80.00th=[ 1053], 90.00th=[ 1083], 95.00th=[ 1368], 00:23:07.694 | 99.00th=[ 1569], 99.50th=[ 1586], 99.90th=[ 1603], 99.95th=[ 1603], 00:23:07.694 | 99.99th=[ 1603] 00:23:07.694 bw ( KiB/s): min= 2052, max= 7936, per=3.04%, avg=7014.80, stdev=1773.20, samples=10 00:23:07.694 iops : min= 16, max= 62, avg=54.80, stdev=13.86, samples=10 00:23:07.694 lat (msec) : 20=0.31%, 50=2.19%, 100=38.71%, 250=4.86%, 500=2.35% 00:23:07.694 lat (msec) : 750=4.23%, 1000=18.65%, 2000=28.68% 00:23:07.694 cpu : usr=0.14%, sys=0.27%, ctx=435, majf=0, minf=1 00:23:07.694 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.0%, >=64=90.1% 00:23:07.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.694 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.694 issued rwts: total=303,335,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.694 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.694 job24: (groupid=0, jobs=1): err= 0: pid=81843: Mon Jul 22 17:27:26 2024 00:23:07.694 read: IOPS=64, BW=8304KiB/s (8503kB/s)(44.9MiB/5534msec) 00:23:07.694 slat (usec): min=9, max=2342, avg=37.61, stdev=124.53 00:23:07.694 clat (msec): min=49, max=584, avg=89.23, stdev=71.03 00:23:07.694 lat (msec): min=49, max=584, avg=89.26, stdev=71.03 00:23:07.694 clat percentiles (msec): 00:23:07.694 | 1.00th=[ 50], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 65], 00:23:07.694 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:23:07.694 | 70.00th=[ 70], 80.00th=[ 
77], 90.00th=[ 150], 95.00th=[ 224], 00:23:07.694 | 99.00th=[ 558], 99.50th=[ 567], 99.90th=[ 584], 99.95th=[ 584], 00:23:07.694 | 99.99th=[ 584] 00:23:07.694 bw ( KiB/s): min= 256, max=13056, per=3.54%, avg=8238.64, stdev=3164.83, samples=11 00:23:07.694 iops : min= 2, max= 102, avg=64.27, stdev=24.72, samples=11 00:23:07.694 write: IOPS=60, BW=7702KiB/s (7887kB/s)(41.6MiB/5534msec); 0 zone resets 00:23:07.694 slat (usec): min=13, max=627, avg=42.39, stdev=45.74 00:23:07.694 clat (msec): min=266, max=1531, avg=965.15, stdev=182.08 00:23:07.694 lat (msec): min=266, max=1531, avg=965.20, stdev=182.09 00:23:07.694 clat percentiles (msec): 00:23:07.694 | 1.00th=[ 351], 5.00th=[ 584], 10.00th=[ 743], 20.00th=[ 919], 00:23:07.694 | 30.00th=[ 961], 40.00th=[ 978], 50.00th=[ 995], 60.00th=[ 1003], 00:23:07.694 | 70.00th=[ 1011], 80.00th=[ 1028], 90.00th=[ 1062], 95.00th=[ 1284], 00:23:07.694 | 99.00th=[ 1502], 99.50th=[ 1519], 99.90th=[ 1536], 99.95th=[ 1536], 00:23:07.694 | 99.99th=[ 1536] 00:23:07.694 bw ( KiB/s): min= 2048, max= 7936, per=3.05%, avg=7039.90, stdev=1778.97, samples=10 00:23:07.694 iops : min= 16, max= 62, avg=54.90, stdev=13.89, samples=10 00:23:07.694 lat (msec) : 50=0.72%, 100=43.64%, 250=5.64%, 500=2.31%, 750=4.48% 00:23:07.694 lat (msec) : 1000=23.41%, 2000=19.80% 00:23:07.694 cpu : usr=0.25%, sys=0.31%, ctx=421, majf=0, minf=1 00:23:07.694 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.6%, >=64=90.9% 00:23:07.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.694 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:23:07.694 issued rwts: total=359,333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.694 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.694 job25: (groupid=0, jobs=1): err= 0: pid=81844: Mon Jul 22 17:27:26 2024 00:23:07.694 read: IOPS=63, BW=8135KiB/s (8331kB/s)(44.1MiB/5554msec) 00:23:07.694 slat (usec): min=8, max=114, avg=23.30, stdev=11.97 
00:23:07.694 clat (msec): min=11, max=598, avg=84.96, stdev=69.34 00:23:07.694 lat (msec): min=11, max=598, avg=84.99, stdev=69.34 00:23:07.694 clat percentiles (msec): 00:23:07.694 | 1.00th=[ 32], 5.00th=[ 51], 10.00th=[ 63], 20.00th=[ 64], 00:23:07.694 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:23:07.694 | 70.00th=[ 70], 80.00th=[ 73], 90.00th=[ 132], 95.00th=[ 220], 00:23:07.695 | 99.00th=[ 575], 99.50th=[ 592], 99.90th=[ 600], 99.95th=[ 600], 00:23:07.695 | 99.99th=[ 600] 00:23:07.695 bw ( KiB/s): min= 256, max=14336, per=3.49%, avg=8122.18, stdev=3394.47, samples=11 00:23:07.695 iops : min= 2, max= 112, avg=63.45, stdev=26.52, samples=11 00:23:07.695 write: IOPS=59, BW=7674KiB/s (7859kB/s)(41.6MiB/5554msec); 0 zone resets 00:23:07.695 slat (nsec): min=10915, max=80097, avg=28399.04, stdev=10752.49 00:23:07.695 clat (msec): min=248, max=1551, avg=975.54, stdev=180.94 00:23:07.695 lat (msec): min=249, max=1551, avg=975.57, stdev=180.94 00:23:07.695 clat percentiles (msec): 00:23:07.695 | 1.00th=[ 330], 5.00th=[ 575], 10.00th=[ 735], 20.00th=[ 944], 00:23:07.695 | 30.00th=[ 978], 40.00th=[ 986], 50.00th=[ 1003], 60.00th=[ 1011], 00:23:07.695 | 70.00th=[ 1028], 80.00th=[ 1036], 90.00th=[ 1053], 95.00th=[ 1284], 00:23:07.695 | 99.00th=[ 1519], 99.50th=[ 1536], 99.90th=[ 1552], 99.95th=[ 1552], 00:23:07.695 | 99.99th=[ 1552] 00:23:07.695 bw ( KiB/s): min= 2048, max= 7936, per=3.04%, avg=7014.40, stdev=1770.33, samples=10 00:23:07.695 iops : min= 16, max= 62, avg=54.80, stdev=13.83, samples=10 00:23:07.695 lat (msec) : 20=0.29%, 50=1.75%, 100=43.29%, 250=4.37%, 500=2.33% 00:23:07.695 lat (msec) : 750=4.37%, 1000=19.10%, 2000=24.49% 00:23:07.695 cpu : usr=0.11%, sys=0.31%, ctx=427, majf=0, minf=1 00:23:07.695 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8% 00:23:07.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.695 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, 
>=64=0.0% 00:23:07.695 issued rwts: total=353,333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.695 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:07.695 job26: (groupid=0, jobs=1): err= 0: pid=81845: Mon Jul 22 17:27:26 2024 00:23:07.695 read: IOPS=54, BW=7025KiB/s (7194kB/s)(38.0MiB/5539msec) 00:23:07.695 slat (nsec): min=10588, max=73330, avg=27218.39, stdev=13063.33 00:23:07.695 clat (msec): min=47, max=576, avg=92.03, stdev=68.01 00:23:07.695 lat (msec): min=47, max=576, avg=92.05, stdev=68.01 00:23:07.695 clat percentiles (msec): 00:23:07.695 | 1.00th=[ 50], 5.00th=[ 62], 10.00th=[ 63], 20.00th=[ 65], 00:23:07.695 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 70], 00:23:07.695 | 70.00th=[ 72], 80.00th=[ 93], 90.00th=[ 159], 95.00th=[ 232], 00:23:07.695 | 99.00th=[ 275], 99.50th=[ 575], 99.90th=[ 575], 99.95th=[ 575], 00:23:07.695 | 99.99th=[ 575] 00:23:07.695 bw ( KiB/s): min= 5620, max=14592, per=3.30%, avg=7700.20, stdev=2575.57, samples=10 00:23:07.695 iops : min= 43, max= 114, avg=59.80, stdev=20.33, samples=10 00:23:07.695 write: IOPS=60, BW=7765KiB/s (7951kB/s)(42.0MiB/5539msec); 0 zone resets 00:23:07.695 slat (usec): min=13, max=127, avg=33.66, stdev=14.85 00:23:07.695 clat (msec): min=260, max=1538, avg=969.89, stdev=184.24 00:23:07.695 lat (msec): min=260, max=1538, avg=969.93, stdev=184.25 00:23:07.695 clat percentiles (msec): 00:23:07.695 | 1.00th=[ 363], 5.00th=[ 592], 10.00th=[ 726], 20.00th=[ 894], 00:23:07.695 | 30.00th=[ 961], 40.00th=[ 986], 50.00th=[ 1003], 60.00th=[ 1020], 00:23:07.695 | 70.00th=[ 1028], 80.00th=[ 1045], 90.00th=[ 1083], 95.00th=[ 1267], 00:23:07.695 | 99.00th=[ 1485], 99.50th=[ 1502], 99.90th=[ 1536], 99.95th=[ 1536], 00:23:07.695 | 99.99th=[ 1536] 00:23:07.695 bw ( KiB/s): min= 256, max= 7936, per=2.78%, avg=6417.73, stdev=2646.75, samples=11 00:23:07.695 iops : min= 2, max= 62, avg=49.82, stdev=20.54, samples=11 00:23:07.695 lat (msec) : 50=0.78%, 100=38.12%, 250=7.34%, 500=2.03%, 750=4.84% 
00:23:07.695 lat (msec) : 1000=19.84%, 2000=27.03%
00:23:07.695 cpu : usr=0.13%, sys=0.38%, ctx=397, majf=0, minf=1
00:23:07.695 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2%
00:23:07.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:07.695 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0%
00:23:07.695 issued rwts: total=304,336,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:07.695 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:07.695 job27: (groupid=0, jobs=1): err= 0: pid=81846: Mon Jul 22 17:27:26 2024
00:23:07.695 read: IOPS=60, BW=7772KiB/s (7958kB/s)(42.2MiB/5567msec)
00:23:07.695 slat (nsec): min=7342, max=71154, avg=25751.56, stdev=12871.09
00:23:07.695 clat (msec): min=19, max=600, avg=85.73, stdev=51.98
00:23:07.695 lat (msec): min=19, max=600, avg=85.76, stdev=51.97
00:23:07.695 clat percentiles (msec):
00:23:07.695 | 1.00th=[ 31], 5.00th=[ 54], 10.00th=[ 63], 20.00th=[ 65],
00:23:07.695 | 30.00th=[ 67], 40.00th=[ 67], 50.00th=[ 69], 60.00th=[ 70],
00:23:07.695 | 70.00th=[ 71], 80.00th=[ 89], 90.00th=[ 155], 95.00th=[ 197],
00:23:07.695 | 99.00th=[ 296], 99.50th=[ 309], 99.90th=[ 600], 99.95th=[ 600],
00:23:07.695 | 99.99th=[ 600]
00:23:07.695 bw ( KiB/s): min= 256, max=16896, per=3.37%, avg=7842.91, stdev=4005.84, samples=11
00:23:07.695 iops : min= 2, max= 132, avg=61.27, stdev=31.30, samples=11
00:23:07.695 write: IOPS=60, BW=7726KiB/s (7911kB/s)(42.0MiB/5567msec); 0 zone resets
00:23:07.695 slat (usec): min=10, max=1509, avg=34.82, stdev=81.77
00:23:07.695 clat (msec): min=268, max=1594, avg=972.01, stdev=184.50
00:23:07.695 lat (msec): min=270, max=1594, avg=972.05, stdev=184.48
00:23:07.695 clat percentiles (msec):
00:23:07.695 | 1.00th=[ 330], 5.00th=[ 584], 10.00th=[ 751], 20.00th=[ 911],
00:23:07.695 | 30.00th=[ 969], 40.00th=[ 995], 50.00th=[ 1003], 60.00th=[ 1011],
00:23:07.695 | 70.00th=[ 1020], 80.00th=[ 1028], 90.00th=[ 1062], 95.00th=[ 1301],
00:23:07.695 | 99.00th=[ 1552], 99.50th=[ 1586], 99.90th=[ 1603], 99.95th=[ 1603],
00:23:07.695 | 99.99th=[ 1603]
00:23:07.695 bw ( KiB/s): min= 2048, max= 7936, per=3.04%, avg=7014.40, stdev=1774.44, samples=10
00:23:07.695 iops : min= 16, max= 62, avg=54.80, stdev=13.86, samples=10
00:23:07.695 lat (msec) : 20=0.30%, 50=1.19%, 100=39.32%, 250=8.61%, 500=1.78%
00:23:07.695 lat (msec) : 750=4.15%, 1000=18.55%, 2000=26.11%
00:23:07.695 cpu : usr=0.14%, sys=0.36%, ctx=394, majf=0, minf=1
00:23:07.695 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7%
00:23:07.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:07.695 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0%
00:23:07.695 issued rwts: total=338,336,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:07.695 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:07.695 job28: (groupid=0, jobs=1): err= 0: pid=81847: Mon Jul 22 17:27:26 2024
00:23:07.695 read: IOPS=70, BW=8991KiB/s (9207kB/s)(48.6MiB/5538msec)
00:23:07.695 slat (usec): min=8, max=193, avg=28.95, stdev=17.68
00:23:07.695 clat (msec): min=47, max=580, avg=89.59, stdev=61.88
00:23:07.695 lat (msec): min=47, max=580, avg=89.62, stdev=61.88
00:23:07.695 clat percentiles (msec):
00:23:07.695 | 1.00th=[ 49], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 65],
00:23:07.695 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 68], 60.00th=[ 70],
00:23:07.695 | 70.00th=[ 71], 80.00th=[ 86], 90.00th=[ 169], 95.00th=[ 211],
00:23:07.695 | 99.00th=[ 275], 99.50th=[ 550], 99.90th=[ 584], 99.95th=[ 584],
00:23:07.695 | 99.99th=[ 584]
00:23:07.695 bw ( KiB/s): min= 6898, max=17408, per=4.24%, avg=9875.20, stdev=3176.97, samples=10
00:23:07.695 iops : min= 53, max= 136, avg=76.80, stdev=25.06, samples=10
00:23:07.695 write: IOPS=60, BW=7766KiB/s (7952kB/s)(42.0MiB/5538msec); 0 zone resets
00:23:07.695 slat (usec): min=10, max=132, avg=37.61, stdev=17.06
00:23:07.695 clat (msec): min=261, max=1496, avg=949.30, stdev=185.16
00:23:07.695 lat (msec): min=261, max=1496, avg=949.34, stdev=185.17
00:23:07.695 clat percentiles (msec):
00:23:07.695 | 1.00th=[ 363], 5.00th=[ 575], 10.00th=[ 709], 20.00th=[ 844],
00:23:07.695 | 30.00th=[ 953], 40.00th=[ 961], 50.00th=[ 978], 60.00th=[ 995],
00:23:07.695 | 70.00th=[ 1011], 80.00th=[ 1028], 90.00th=[ 1062], 95.00th=[ 1267],
00:23:07.695 | 99.00th=[ 1452], 99.50th=[ 1485], 99.90th=[ 1502], 99.95th=[ 1502],
00:23:07.695 | 99.99th=[ 1502]
00:23:07.695 bw ( KiB/s): min= 2304, max= 8175, per=3.06%, avg=7059.40, stdev=1705.49, samples=10
00:23:07.695 iops : min= 18, max= 63, avg=54.80, stdev=13.22, samples=10
00:23:07.695 lat (msec) : 50=0.97%, 100=43.17%, 250=8.41%, 500=1.93%, 750=5.52%
00:23:07.695 lat (msec) : 1000=22.21%, 2000=17.79%
00:23:07.695 cpu : usr=0.16%, sys=0.42%, ctx=455, majf=0, minf=1
00:23:07.695 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3%
00:23:07.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:07.695 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0%
00:23:07.695 issued rwts: total=389,336,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:07.695 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:07.695 job29: (groupid=0, jobs=1): err= 0: pid=81848: Mon Jul 22 17:27:26 2024
00:23:07.695 read: IOPS=67, BW=8634KiB/s (8841kB/s)(46.6MiB/5530msec)
00:23:07.695 slat (nsec): min=6883, max=91164, avg=25272.45, stdev=14380.89
00:23:07.695 clat (msec): min=43, max=590, avg=92.26, stdev=61.93
00:23:07.695 lat (msec): min=43, max=590, avg=92.29, stdev=61.92
00:23:07.695 clat percentiles (msec):
00:23:07.695 | 1.00th=[ 48], 5.00th=[ 54], 10.00th=[ 63], 20.00th=[ 65],
00:23:07.695 | 30.00th=[ 67], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 70],
00:23:07.695 | 70.00th=[ 72], 80.00th=[ 108], 90.00th=[ 186], 95.00th=[ 234],
00:23:07.695 | 99.00th=[ 271], 99.50th=[ 542], 99.90th=[ 592], 99.95th=[ 592],
00:23:07.695 | 99.99th=[ 592]
00:23:07.695 bw ( KiB/s): min= 4608, max=19238, per=4.07%, avg=9496.10, stdev=3979.87, samples=10
00:23:07.695 iops : min= 36, max= 150, avg=73.90, stdev=31.09, samples=10
00:23:07.695 write: IOPS=60, BW=7777KiB/s (7964kB/s)(42.0MiB/5530msec); 0 zone resets
00:23:07.695 slat (usec): min=8, max=256, avg=31.74, stdev=22.74
00:23:07.695 clat (msec): min=253, max=1553, avg=949.12, stdev=195.71
00:23:07.695 lat (msec): min=253, max=1553, avg=949.15, stdev=195.71
00:23:07.695 clat percentiles (msec):
00:23:07.695 | 1.00th=[ 372], 5.00th=[ 575], 10.00th=[ 676], 20.00th=[ 818],
00:23:07.695 | 30.00th=[ 944], 40.00th=[ 969], 50.00th=[ 986], 60.00th=[ 1003],
00:23:07.695 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1083], 95.00th=[ 1267],
00:23:07.695 | 99.00th=[ 1485], 99.50th=[ 1536], 99.90th=[ 1552], 99.95th=[ 1552],
00:23:07.695 | 99.99th=[ 1552]
00:23:07.695 bw ( KiB/s): min= 2052, max= 7936, per=3.04%, avg=7035.90, stdev=1768.46, samples=10
00:23:07.695 iops : min= 16, max= 62, avg=54.70, stdev=13.78, samples=10
00:23:07.695 lat (msec) : 50=0.99%, 100=40.34%, 250=10.01%, 500=2.12%, 750=6.06%
00:23:07.695 lat (msec) : 1000=20.87%, 2000=19.61%
00:23:07.695 cpu : usr=0.16%, sys=0.31%, ctx=434, majf=0, minf=1
00:23:07.695 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1%
00:23:07.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:07.695 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0%
00:23:07.695 issued rwts: total=373,336,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:07.696 latency : target=0, window=0, percentile=100.00%, depth=64
00:23:07.696
00:23:07.696 Run status group 0 (all jobs):
00:23:07.696 READ: bw=228MiB/s (239MB/s), 6872KiB/s-8991KiB/s (7037kB/s-9207kB/s), io=1267MiB (1329MB), run=5530-5569msec
00:23:07.696 WRITE: bw=226MiB/s (237MB/s), 7665KiB/s-7866KiB/s (7849kB/s-8054kB/s), io=1257MiB (1318MB), run=5530-5569msec
00:23:07.696
00:23:07.696 Disk stats (read/write):
00:23:07.696 sda: ios=353/310, merge=0/0, ticks=25360/294461, in_queue=319822, util=89.78%
00:23:07.696 sdb: ios=347/310, merge=0/0, ticks=24696/296387, in_queue=321083, util=91.21%
00:23:07.696 sdc: ios=388/309, merge=0/0, ticks=28075/292886, in_queue=320962, util=91.24%
00:23:07.696 sdd: ios=365/310, merge=0/0, ticks=24701/297922, in_queue=322624, util=92.05%
00:23:07.696 sde: ios=380/309, merge=0/0, ticks=28469/292641, in_queue=321111, util=91.97%
00:23:07.696 sdf: ios=402/309, merge=0/0, ticks=27947/293819, in_queue=321767, util=92.19%
00:23:07.696 sdg: ios=380/309, merge=0/0, ticks=27406/291876, in_queue=319282, util=92.01%
00:23:07.696 sdh: ios=339/309, merge=0/0, ticks=25054/296757, in_queue=321812, util=92.17%
00:23:07.696 sdi: ios=321/309, merge=0/0, ticks=26197/293527, in_queue=319725, util=91.86%
00:23:07.696 sdj: ios=368/309, merge=0/0, ticks=31209/290037, in_queue=321246, util=92.35%
00:23:07.696 sdk: ios=381/309, merge=0/0, ticks=30756/290816, in_queue=321573, util=92.55%
00:23:07.696 sdl: ios=325/309, merge=0/0, ticks=28904/291548, in_queue=320453, util=93.03%
00:23:07.696 sdm: ios=386/309, merge=0/0, ticks=33107/288024, in_queue=321131, util=93.36%
00:23:07.696 sdn: ios=357/309, merge=0/0, ticks=29586/291462, in_queue=321048, util=93.93%
00:23:07.696 sdo: ios=322/311, merge=0/0, ticks=27365/294005, in_queue=321370, util=94.20%
00:23:07.696 sdp: ios=324/309, merge=0/0, ticks=28015/292047, in_queue=320062, util=94.33%
00:23:07.696 sdq: ios=374/310, merge=0/0, ticks=32150/289651, in_queue=321801, util=94.55%
00:23:07.696 sdr: ios=300/309, merge=0/0, ticks=25100/295608, in_queue=320709, util=95.41%
00:23:07.696 sds: ios=330/309, merge=0/0, ticks=27866/291943, in_queue=319809, util=94.96%
00:23:07.696 sdt: ios=370/309, merge=0/0, ticks=31385/289759, in_queue=321144, util=95.61%
00:23:07.696 sdu: ios=319/312, merge=0/0, ticks=26407/296083, in_queue=322490, util=96.21%
00:23:07.696 sdv: ios=313/309, merge=0/0, ticks=25891/294066, in_queue=319958, util=95.89%
00:23:07.696 sdw: ios=372/309, merge=0/0, ticks=29560/291548, in_queue=321109, util=96.03%
00:23:07.696 sdx: ios=303/309, merge=0/0, ticks=23978/297052, in_queue=321031, util=96.73%
00:23:07.696 sdy: ios=359/309, merge=0/0, ticks=29512/291383, in_queue=320895, util=96.26%
00:23:07.696 sdz: ios=353/309, merge=0/0, ticks=27875/294304, in_queue=322179, util=96.90%
00:23:07.696 sdaa: ios=304/309, merge=0/0, ticks=26454/293728, in_queue=320182, util=96.64%
00:23:07.696 sdab: ios=338/309, merge=0/0, ticks=28416/293046, in_queue=321462, util=97.18%
00:23:07.696 sdac: ios=389/309, merge=0/0, ticks=33300/286919, in_queue=320219, util=96.88%
00:23:07.696 sdad: ios=373/309, merge=0/0, ticks=33385/286655, in_queue=320041, util=97.31%
00:23:07.696 [2024-07-22 17:27:26.086512] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:07.696 [2024-07-22 17:27:26.089109] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:07.696 [2024-07-22 17:27:26.091619] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:07.696 [2024-07-22 17:27:26.094175] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:07.696 [2024-07-22 17:27:26.096668] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:07.696 [2024-07-22 17:27:26.099198] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:07.696 [2024-07-22 17:27:26.101741] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:07.696 [2024-07-22 17:27:26.104495] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:07.696 [2024-07-22 17:27:26.107144] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:07.696 [2024-07-22 17:27:26.110126] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:07.696 17:27:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 262144 -d 16 -t randwrite -r 10
00:23:07.696 [2024-07-22 17:27:26.113185] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:07.696 [2024-07-22 17:27:26.116228] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:07.696 [global]
00:23:07.696 thread=1
00:23:07.696 invalidate=1
00:23:07.696 rw=randwrite
00:23:07.696 time_based=1
00:23:07.696 runtime=10
00:23:07.696 ioengine=libaio
00:23:07.696 direct=1
00:23:07.696 bs=262144
00:23:07.696 iodepth=16
00:23:07.696 norandommap=1
00:23:07.696 numjobs=1
00:23:07.696
00:23:07.696 [job0]
00:23:07.696 filename=/dev/sda
00:23:07.696 [job1]
00:23:07.696 filename=/dev/sdb
00:23:07.696 [job2]
00:23:07.696 filename=/dev/sdc
00:23:07.696 [job3]
00:23:07.696 filename=/dev/sdd
00:23:07.696 [job4]
00:23:07.696 filename=/dev/sde
00:23:07.696 [job5]
00:23:07.696 filename=/dev/sdf
00:23:07.696 [job6]
00:23:07.696 filename=/dev/sdg
00:23:07.696 [job7]
00:23:07.696 filename=/dev/sdh
00:23:07.696 [job8]
00:23:07.696 filename=/dev/sdi
00:23:07.696 [job9]
00:23:07.696 filename=/dev/sdj
00:23:07.696 [job10]
00:23:07.696 filename=/dev/sdk
00:23:07.696 [job11]
00:23:07.696 filename=/dev/sdl
00:23:07.696 [job12]
00:23:07.696 filename=/dev/sdm
00:23:07.696 [job13]
00:23:07.696 filename=/dev/sdn
00:23:07.696 [job14]
00:23:07.696 filename=/dev/sdo
00:23:07.696 [job15]
00:23:07.696 filename=/dev/sdp
00:23:07.696 [job16]
00:23:07.696 filename=/dev/sdq
00:23:07.696 [job17]
00:23:07.696 filename=/dev/sdr
00:23:07.696 [job18]
00:23:07.696 filename=/dev/sds
00:23:07.696 [job19]
00:23:07.696 filename=/dev/sdt
00:23:07.696 [job20]
00:23:07.696 filename=/dev/sdu
00:23:07.696 [job21]
00:23:07.696 filename=/dev/sdv
00:23:07.696 [job22]
00:23:07.696 filename=/dev/sdw
00:23:07.696 [job23]
00:23:07.696 filename=/dev/sdx
00:23:07.696 [job24]
00:23:07.696 filename=/dev/sdy
00:23:07.696 [job25]
00:23:07.696 filename=/dev/sdz
00:23:07.696 [job26]
00:23:07.696 filename=/dev/sdaa
00:23:07.696 [job27]
00:23:07.696 filename=/dev/sdab
00:23:07.696 [job28]
00:23:07.696 filename=/dev/sdac
00:23:07.696 [job29]
00:23:07.696 filename=/dev/sdad
00:23:07.954 queue_depth set to 113 (sda)
00:23:07.954 queue_depth set to 113 (sdb)
00:23:07.954 queue_depth set to 113 (sdc)
00:23:07.954 queue_depth set to 113 (sdd)
00:23:07.954 queue_depth set to 113 (sde)
00:23:07.954 queue_depth set to 113 (sdf)
00:23:07.954 queue_depth set to 113 (sdg)
00:23:07.954 queue_depth set to 113 (sdh)
00:23:07.954 queue_depth set to 113 (sdi)
00:23:07.954 queue_depth set to 113 (sdj)
00:23:07.954 queue_depth set to 113 (sdk)
00:23:07.954 queue_depth set to 113 (sdl)
00:23:07.954 queue_depth set to 113 (sdm)
00:23:07.954 queue_depth set to 113 (sdn)
00:23:07.954 queue_depth set to 113 (sdo)
00:23:07.954 queue_depth set to 113 (sdp)
00:23:07.954 queue_depth set to 113 (sdq)
00:23:07.954 queue_depth set to 113 (sdr)
00:23:07.954 queue_depth set to 113 (sds)
00:23:07.954 queue_depth set to 113 (sdt)
00:23:07.954 queue_depth set to 113 (sdu)
00:23:07.954 queue_depth set to 113 (sdv)
00:23:07.954 queue_depth set to 113 (sdw)
00:23:07.954 queue_depth set to 113 (sdx)
00:23:07.954 queue_depth set to 113 (sdy)
00:23:07.954 queue_depth set to 113 (sdz)
00:23:07.954 queue_depth set to 113 (sdaa)
00:23:07.954 queue_depth set to 113 (sdab)
00:23:07.954 queue_depth set to 113 (sdac)
00:23:07.954 queue_depth set to 113 (sdad)
00:23:08.211 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job11: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job12: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job13: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job14: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job15: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job16: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job17: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job18: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job19: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job20: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job21: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job22: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job23: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job24: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job25: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job26: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job27: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job28: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 job29: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16
00:23:08.211 fio-3.35
00:23:08.211 Starting 30 threads
00:23:08.211 [2024-07-22 17:27:26.924330] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.211 [2024-07-22 17:27:26.928900] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.211 [2024-07-22 17:27:26.934170] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.211 [2024-07-22 17:27:26.938758] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.211 [2024-07-22 17:27:26.941688] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.211 [2024-07-22 17:27:26.944691] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.211 [2024-07-22 17:27:26.947807] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.211 [2024-07-22 17:27:26.950845] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.211 [2024-07-22 17:27:26.953606] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.211 [2024-07-22 17:27:26.956588] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.211 [2024-07-22 17:27:26.959917] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.211 [2024-07-22 17:27:26.963412] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.211 [2024-07-22 17:27:26.965884] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.211 [2024-07-22 17:27:26.968398] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.212 [2024-07-22 17:27:26.971050] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.212 [2024-07-22 17:27:26.973403] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.212 [2024-07-22 17:27:26.976024] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.212 [2024-07-22 17:27:26.978583] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.212 [2024-07-22 17:27:26.981160] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.212 [2024-07-22 17:27:26.983870] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.212 [2024-07-22 17:27:26.986358] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.212 [2024-07-22 17:27:26.988987] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.212 [2024-07-22 17:27:26.991581] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.212 [2024-07-22 17:27:26.994974] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.212 [2024-07-22 17:27:26.998099] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.212 [2024-07-22 17:27:27.001895] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.212 [2024-07-22 17:27:27.004348] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.212 [2024-07-22 17:27:27.007143] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.212 [2024-07-22 17:27:27.009533] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:08.212 [2024-07-22 17:27:27.012545] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:20.414 [2024-07-22 17:27:37.805104] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:20.414 [2024-07-22 17:27:37.819933] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:20.414 [2024-07-22 17:27:37.823903] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:20.414 [2024-07-22 17:27:37.826673] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:20.414 [2024-07-22 17:27:37.829586] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:20.414 [2024-07-22 17:27:37.832641] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:20.414 [2024-07-22 17:27:37.835362] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:20.414 [2024-07-22 17:27:37.837888] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:20.414 [2024-07-22 17:27:37.840674] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:20.414 [2024-07-22 17:27:37.843567] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:20.414 [2024-07-22 17:27:37.846318] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:20.414 [2024-07-22 17:27:37.849162] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:20.414 [2024-07-22 17:27:37.851940] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:20.414 [2024-07-22 17:27:37.854664] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:20.414 [2024-07-22 17:27:37.857435] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:20.414 [2024-07-22 17:27:37.860005] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:20.414 [2024-07-22 17:27:37.862659] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:20.414 [2024-07-22 17:27:37.865277] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:23:20.414
00:23:20.414 job0: (groupid=0, jobs=1): err= 0: pid=82348: Mon Jul 22 17:27:37 2024
00:23:20.414 write: IOPS=55, BW=14.0MiB/s (14.6MB/s)(143MiB/10244msec); 0 zone resets
00:23:20.414 slat (usec): min=24, max=406, avg=97.24, stdev=56.50
00:23:20.414 clat (msec): min=32, max=508, avg=286.00, stdev=33.55
00:23:20.414 lat (msec): min=32, max=508, avg=286.09, stdev=33.56
00:23:20.414 clat percentiles (msec):
00:23:20.414 | 1.00th=[ 121], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279],
00:23:20.414 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288],
00:23:20.414 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 300], 95.00th=[ 317],
00:23:20.414 | 99.00th=[ 422], 99.50th=[ 472], 99.90th=[ 510], 99.95th=[ 510],
00:23:20.414 | 99.99th=[ 510]
00:23:20.414 bw ( KiB/s): min=12800, max=15360, per=3.32%, avg=14257.65, stdev=504.23, samples=20
00:23:20.414 iops : min= 50, max= 60, avg=55.60, stdev= 1.98, samples=20
00:23:20.414 lat (msec) : 50=0.17%, 100=0.52%, 250=1.57%, 500=97.55%, 750=0.17%
00:23:20.414 cpu : usr=0.17%, sys=0.33%, ctx=648, majf=0, minf=1
00:23:20.414 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0%
00:23:20.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.414 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.414 issued rwts: total=0,572,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:20.414 latency : target=0, window=0, percentile=100.00%, depth=16
00:23:20.414 job1: (groupid=0, jobs=1): err= 0: pid=82349: Mon Jul 22 17:27:37 2024
00:23:20.414 write: IOPS=55, BW=14.0MiB/s (14.7MB/s)(143MiB/10247msec); 0 zone resets
00:23:20.414 slat (usec): min=33, max=147, avg=65.61, stdev=15.99
00:23:20.414 clat (msec): min=31, max=495, avg=285.64, stdev=32.74
00:23:20.414 lat (msec): min=31, max=495, avg=285.71, stdev=32.74
00:23:20.414 clat percentiles (msec):
00:23:20.414 | 1.00th=[ 118], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279],
00:23:20.414 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288],
00:23:20.414 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 317],
00:23:20.414 | 99.00th=[ 409], 99.50th=[ 460], 99.90th=[ 498], 99.95th=[ 498],
00:23:20.414 | 99.99th=[ 498]
00:23:20.414 bw ( KiB/s): min=12800, max=14848, per=3.32%, avg=14256.30, stdev=532.01, samples=20
00:23:20.414 iops : min= 50, max= 58, avg=55.60, stdev= 2.09, samples=20
00:23:20.414 lat (msec) : 50=0.35%, 100=0.35%, 250=1.75%, 500=97.56%
00:23:20.414 cpu : usr=0.20%, sys=0.30%, ctx=579, majf=0, minf=1
00:23:20.414 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0%
00:23:20.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.414 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.414 issued rwts: total=0,573,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:20.414 latency : target=0, window=0, percentile=100.00%, depth=16
00:23:20.414 job2: (groupid=0, jobs=1): err= 0: pid=82350: Mon Jul 22 17:27:37 2024
00:23:20.414 write: IOPS=56, BW=14.0MiB/s (14.7MB/s)(144MiB/10263msec); 0 zone resets
00:23:20.414 slat (usec): min=26, max=204, avg=55.46, stdev=16.53
00:23:20.414 clat (msec): min=6, max=517, avg=285.06, stdev=38.73
00:23:20.414 lat (msec): min=6, max=517, avg=285.11, stdev=38.73
00:23:20.414 clat percentiles (msec):
00:23:20.414 | 1.00th=[ 75], 5.00th=[ 271], 10.00th=[ 275], 20.00th=[ 279],
00:23:20.414 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288],
00:23:20.414 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 300], 95.00th=[ 317],
00:23:20.414 | 99.00th=[ 430], 99.50th=[ 485], 99.90th=[ 518], 99.95th=[ 518],
00:23:20.414 | 99.99th=[ 518]
00:23:20.414 bw ( KiB/s): min=12288, max=14848, per=3.34%, avg=14333.05, stdev=597.65, samples=20
00:23:20.414 iops : min= 48, max= 58, avg=55.90, stdev= 2.31, samples=20
00:23:20.414 lat (msec) : 10=0.17%, 20=0.17%, 50=0.35%, 100=0.52%, 250=1.39%
00:23:20.414 lat (msec) : 500=97.22%, 750=0.17%
00:23:20.414 cpu : usr=0.17%, sys=0.26%, ctx=576, majf=0, minf=1
00:23:20.414 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0%
00:23:20.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.414 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.414 issued rwts: total=0,575,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:20.414 latency : target=0, window=0, percentile=100.00%, depth=16
00:23:20.414 job3: (groupid=0, jobs=1): err= 0: pid=82351: Mon Jul 22 17:27:37 2024
00:23:20.414 write: IOPS=55, BW=14.0MiB/s (14.7MB/s)(143MiB/10248msec); 0 zone resets
00:23:20.414 slat (usec): min=26, max=1394, avg=69.47, stdev=62.79
00:23:20.414 clat (msec): min=31, max=495, avg=285.62, stdev=32.77
00:23:20.414 lat (msec): min=31, max=495, avg=285.69, stdev=32.77
00:23:20.415 clat percentiles (msec):
00:23:20.415 | 1.00th=[ 120], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279],
00:23:20.415 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288],
00:23:20.415 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 317],
00:23:20.415 | 99.00th=[ 409], 99.50th=[ 460], 99.90th=[ 498], 99.95th=[ 498],
00:23:20.415 | 99.99th=[ 498]
00:23:20.415 bw ( KiB/s): min=12800, max=14848, per=3.32%, avg=14256.30, stdev=532.01, samples=20
00:23:20.415 iops : min= 50, max= 58, avg=55.60, stdev= 2.09, samples=20
00:23:20.415 lat (msec) : 50=0.35%, 100=0.35%, 250=1.75%, 500=97.56%
00:23:20.415 cpu : usr=0.20%, sys=0.24%, ctx=597, majf=0, minf=1
00:23:20.415 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0%
00:23:20.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.415 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.415 issued rwts: total=0,573,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:20.415 latency : target=0, window=0, percentile=100.00%, depth=16
00:23:20.415 job4: (groupid=0, jobs=1): err= 0: pid=82353: Mon Jul 22 17:27:37 2024
00:23:20.415 write: IOPS=55, BW=14.0MiB/s (14.6MB/s)(143MiB/10256msec); 0 zone resets
00:23:20.415 slat (usec): min=18, max=144, avg=58.94, stdev=15.60
00:23:20.415 clat (msec): min=20, max=514, avg=285.85, stdev=35.28
00:23:20.415 lat (msec): min=20, max=514, avg=285.91, stdev=35.28
00:23:20.415 clat percentiles (msec):
00:23:20.415 | 1.00th=[ 108], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279],
00:23:20.415 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288],
00:23:20.415 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 317],
00:23:20.415 | 99.00th=[ 426], 99.50th=[ 481], 99.90th=[ 514], 99.95th=[ 514],
00:23:20.415 | 99.99th=[ 514]
00:23:20.415 bw ( KiB/s): min=12288, max=15329, per=3.33%, avg=14283.15, stdev=615.40, samples=20
00:23:20.415 iops : min= 48, max= 59, avg=55.70, stdev= 2.34, samples=20
00:23:20.415 lat (msec) : 50=0.35%, 100=0.52%, 250=1.40%, 500=97.56%, 750=0.17%
00:23:20.415 cpu : usr=0.16%, sys=0.30%, ctx=574, majf=0, minf=1
00:23:20.415 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0%
00:23:20.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.415 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.415 issued rwts: total=0,573,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:20.415 latency : target=0, window=0, percentile=100.00%, depth=16
00:23:20.415 job5: (groupid=0, jobs=1): err= 0: pid=82354: Mon Jul 22 17:27:37 2024
00:23:20.415 write: IOPS=56, BW=14.0MiB/s (14.7MB/s)(144MiB/10263msec); 0 zone resets
00:23:20.415 slat (usec): min=27, max=367, avg=75.43, stdev=37.13
00:23:20.415 clat (msec): min=7, max=516, avg=285.03, stdev=38.57
00:23:20.415 lat (msec): min=7, max=517, avg=285.11, stdev=38.58
00:23:20.415 clat percentiles (msec):
00:23:20.415 | 1.00th=[ 78], 5.00th=[ 271], 10.00th=[ 275], 20.00th=[ 279],
00:23:20.415 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288],
00:23:20.415 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 300], 95.00th=[ 317],
00:23:20.415 | 99.00th=[ 430], 99.50th=[ 485], 99.90th=[ 518], 99.95th=[ 518],
00:23:20.415 | 99.99th=[ 518]
00:23:20.415 bw ( KiB/s): min=12288, max=14848, per=3.34%, avg=14333.05, stdev=597.65, samples=20
00:23:20.415 iops : min= 48, max= 58, avg=55.90, stdev= 2.31, samples=20
00:23:20.415 lat (msec) : 10=0.17%, 20=0.17%, 50=0.35%, 100=0.52%, 250=1.39%
00:23:20.415 lat (msec) : 500=97.22%, 750=0.17%
00:23:20.415 cpu : usr=0.17%, sys=0.32%, ctx=621, majf=0, minf=1
00:23:20.415 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0%
00:23:20.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.415 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.415 issued rwts: total=0,575,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:20.415 latency : target=0, window=0, percentile=100.00%, depth=16
00:23:20.415 job6: (groupid=0, jobs=1): err= 0: pid=82384: Mon Jul 22 17:27:37 2024
00:23:20.415 write: IOPS=55, BW=14.0MiB/s (14.6MB/s)(143MiB/10241msec); 0 zone resets
00:23:20.415 slat (usec): min=18, max=217, avg=60.36, stdev=15.25
00:23:20.415 clat (msec): min=32, max=505, avg=285.96, stdev=33.46
00:23:20.415 lat (msec): min=32, max=506, avg=286.02, stdev=33.46
00:23:20.415 clat percentiles (msec):
00:23:20.415 | 1.00th=[ 120], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279],
00:23:20.415 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288],
00:23:20.415 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 317],
00:23:20.415 | 99.00th=[ 418], 99.50th=[ 472], 99.90th=[ 506], 99.95th=[ 506],
00:23:20.415 | 99.99th=[ 506]
00:23:20.415 bw ( KiB/s): min=12800, max=14877, per=3.32%, avg=14257.80, stdev=508.83, samples=20
00:23:20.415 iops : min= 50, max= 58, avg=55.60, stdev= 2.04, samples=20
00:23:20.415 lat (msec) : 50=0.35%, 100=0.35%, 250=1.57%, 500=97.55%, 750=0.17%
00:23:20.415 cpu : usr=0.21%, sys=0.20%, ctx=575, majf=0, minf=1
00:23:20.415 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0%
00:23:20.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.415 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.415 issued rwts: total=0,572,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:20.415 latency : target=0, window=0, percentile=100.00%, depth=16
00:23:20.415 job7: (groupid=0, jobs=1): err= 0: pid=82385: Mon Jul 22 17:27:37 2024
00:23:20.415 write: IOPS=55, BW=14.0MiB/s (14.7MB/s)(143MiB/10248msec); 0 zone resets
00:23:20.415 slat (usec): min=30, max=382, avg=64.89, stdev=21.63
00:23:20.415 clat (msec): min=31, max=495, avg=285.65, stdev=32.77
00:23:20.415 lat (msec): min=31, max=495, avg=285.71, stdev=32.77
00:23:20.415 clat percentiles (msec):
00:23:20.415 | 1.00th=[ 118], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279],
00:23:20.415 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288],
00:23:20.415 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 317],
00:23:20.415 | 99.00th=[ 409], 99.50th=[ 460], 99.90th=[ 498], 99.95th=[ 498],
00:23:20.415 | 99.99th=[ 498]
00:23:20.415 bw ( KiB/s): min=12800, max=14848, per=3.32%, avg=14256.30, stdev=532.01, samples=20
00:23:20.415 iops : min= 50, max= 58, avg=55.60, stdev= 2.09, samples=20
00:23:20.415 lat (msec) : 50=0.35%, 100=0.35%, 250=1.75%, 500=97.56%
00:23:20.415 cpu : usr=0.27%, sys=0.22%, ctx=578, majf=0, minf=1
00:23:20.415 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0%
00:23:20.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.415 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.415 issued rwts: total=0,573,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:20.415 latency : target=0, window=0, percentile=100.00%, depth=16
00:23:20.415 job8: (groupid=0, jobs=1): err= 0: pid=82386: Mon Jul 22 17:27:37 2024
00:23:20.415 write: IOPS=56, BW=14.1MiB/s (14.8MB/s)(145MiB/10270msec); 0 zone resets
00:23:20.415 slat (usec): min=26, max=900, avg=53.32, stdev=46.39
00:23:20.415 clat (msec): min=2, max=516, avg=283.75, stdev=42.78
00:23:20.415 lat (msec): min=2, max=516, avg=283.80, stdev=42.78
00:23:20.415 clat percentiles (msec):
00:23:20.415 | 1.00th=[ 32], 5.00th=[ 271], 10.00th=[ 275], 20.00th=[ 279],
00:23:20.415 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288],
00:23:20.415 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 317],
00:23:20.415 | 99.00th=[ 430], 99.50th=[ 481], 99.90th=[ 518], 99.95th=[ 518],
00:23:20.415 | 99.99th=[ 518]
00:23:20.415 bw ( KiB/s): min=12800, max=16384, per=3.35%, avg=14381.25, stdev=699.91, samples=20
00:23:20.415 iops : min= 50, max= 64, avg=56.00, stdev= 2.68, samples=20
00:23:20.415 lat (msec) : 4=0.17%, 10=0.35%, 20=0.35%, 50=0.35%, 100=0.35%
00:23:20.415 lat (msec) : 250=1.56%, 500=96.71%, 750=0.17%
00:23:20.415 cpu : usr=0.23%, sys=0.13%, ctx=599, majf=0, minf=1
00:23:20.415 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0%
00:23:20.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.415 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.415 issued rwts: total=0,578,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:20.415 latency : target=0, window=0, percentile=100.00%, depth=16
00:23:20.415 job9: (groupid=0, jobs=1): err= 0: pid=82389: Mon Jul 22 17:27:37 2024
00:23:20.415 write: IOPS=55, BW=14.0MiB/s (14.6MB/s)(143MiB/10257msec); 0 zone resets
00:23:20.415 slat (usec): min=29, max=197, avg=54.42, stdev=20.34
00:23:20.415 clat (msec): min=26, max=509, avg=285.89, stdev=34.31
00:23:20.415 lat (msec): min=26, max=509, avg=285.94, stdev=34.31
00:23:20.415 clat percentiles (msec):
00:23:20.415 | 1.00th=[ 114], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279],
00:23:20.415 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288],
00:23:20.415 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 300], 95.00th=[ 317],
00:23:20.415 | 99.00th=[ 422], 99.50th=[ 477], 99.90th=[ 510], 99.95th=[ 510],
00:23:20.415 | 99.99th=[ 510]
00:23:20.415 bw ( KiB/s): min=12288, max=14848, per=3.33%, avg=14284.80, stdev=596.63, samples=20
00:23:20.415 iops : min= 48, max= 58, avg=55.80, stdev= 2.33, samples=20
00:23:20.415 lat (msec) : 50=0.35%, 100=0.52%, 250=1.40%, 500=97.56%, 750=0.17%
00:23:20.415 cpu : usr=0.19%, sys=0.19%, ctx=617, majf=0, minf=1
00:23:20.415 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0%
00:23:20.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.415 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.415 issued rwts: total=0,573,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:20.415 latency : target=0, window=0, percentile=100.00%, depth=16
00:23:20.415 job10: (groupid=0, jobs=1): err= 0: pid=82394: Mon Jul 22 17:27:37 2024
00:23:20.415 write: IOPS=55, BW=14.0MiB/s (14.6MB/s)(143MiB/10259msec); 0 zone resets
00:23:20.415 slat (usec): min=32, max=7670, avg=89.29, stdev=320.35
00:23:20.415 clat (msec): min=14, max=521, avg=285.69, stdev=37.22
00:23:20.415 lat (msec): min=21, max=521, avg=285.78, stdev=37.13
00:23:20.415 clat percentiles (msec):
00:23:20.415 | 1.00th=[ 95], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279],
00:23:20.415 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288],
00:23:20.415 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 300], 95.00th=[ 317],
00:23:20.415 | 99.00th=[ 435], 99.50th=[ 489], 99.90th=[ 523], 99.95th=[ 523],
00:23:20.415 | 99.99th=[ 523]
00:23:20.415 bw ( KiB/s): min=12800, max=15360, per=3.33%, avg=14283.35, stdev=495.45, samples=20
00:23:20.415 iops : min= 50, max= 60, avg=55.75, stdev= 1.94, samples=20
00:23:20.415 lat (msec) : 20=0.17%, 50=0.35%, 100=0.52%, 250=1.40%, 500=97.21%
00:23:20.416 lat (msec) : 750=0.35%
00:23:20.416 cpu : usr=0.16%, sys=0.25%, ctx=609, majf=0, minf=1
00:23:20.416 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0%
00:23:20.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.416 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:20.416 issued rwts: total=0,573,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:20.416 latency : target=0, window=0, percentile=100.00%, depth=16
00:23:20.416 job11: (groupid=0, jobs=1): err= 0: pid=82430: Mon Jul 22 17:27:37 2024
00:23:20.416 write: IOPS=55, BW=14.0MiB/s (14.6MB/s)(143MiB/10245msec); 0 zone resets
00:23:20.416 slat (usec): min=19, max=594, avg=65.78, stdev=31.42
00:23:20.416 clat (msec): min=31, max=509, avg=286.05, stdev=33.75
00:23:20.416 lat (msec): min=31, max=509, avg=286.11,
stdev=33.76 00:23:20.416 clat percentiles (msec): 00:23:20.416 | 1.00th=[ 120], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279], 00:23:20.416 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288], 00:23:20.416 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 300], 95.00th=[ 317], 00:23:20.416 | 99.00th=[ 422], 99.50th=[ 477], 99.90th=[ 510], 99.95th=[ 510], 00:23:20.416 | 99.99th=[ 510] 00:23:20.416 bw ( KiB/s): min=12800, max=15360, per=3.32%, avg=14259.05, stdev=502.92, samples=20 00:23:20.416 iops : min= 50, max= 60, avg=55.65, stdev= 1.93, samples=20 00:23:20.416 lat (msec) : 50=0.35%, 100=0.35%, 250=1.57%, 500=97.55%, 750=0.17% 00:23:20.416 cpu : usr=0.17%, sys=0.32%, ctx=582, majf=0, minf=1 00:23:20.416 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0% 00:23:20.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.416 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.416 issued rwts: total=0,572,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.416 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.416 job12: (groupid=0, jobs=1): err= 0: pid=82442: Mon Jul 22 17:27:37 2024 00:23:20.416 write: IOPS=55, BW=14.0MiB/s (14.7MB/s)(143MiB/10253msec); 0 zone resets 00:23:20.416 slat (usec): min=28, max=2657, avg=62.00, stdev=109.61 00:23:20.416 clat (msec): min=29, max=500, avg=285.72, stdev=33.32 00:23:20.416 lat (msec): min=31, max=500, avg=285.79, stdev=33.29 00:23:20.416 clat percentiles (msec): 00:23:20.416 | 1.00th=[ 116], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279], 00:23:20.416 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288], 00:23:20.416 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 317], 00:23:20.416 | 99.00th=[ 414], 99.50th=[ 468], 99.90th=[ 502], 99.95th=[ 502], 00:23:20.416 | 99.99th=[ 502] 00:23:20.416 bw ( KiB/s): min=12800, max=14848, per=3.32%, avg=14259.20, stdev=532.47, samples=20 00:23:20.416 iops : min= 50, max= 
58, avg=55.70, stdev= 2.08, samples=20 00:23:20.416 lat (msec) : 50=0.35%, 100=0.52%, 250=1.40%, 500=97.56%, 750=0.17% 00:23:20.416 cpu : usr=0.19%, sys=0.20%, ctx=576, majf=0, minf=1 00:23:20.416 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0% 00:23:20.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.416 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.416 issued rwts: total=0,573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.416 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.416 job13: (groupid=0, jobs=1): err= 0: pid=82489: Mon Jul 22 17:27:37 2024 00:23:20.416 write: IOPS=55, BW=14.0MiB/s (14.7MB/s)(144MiB/10265msec); 0 zone resets 00:23:20.416 slat (usec): min=29, max=995, avg=60.42, stdev=42.01 00:23:20.416 clat (msec): min=14, max=516, avg=285.55, stdev=36.70 00:23:20.416 lat (msec): min=15, max=516, avg=285.61, stdev=36.69 00:23:20.416 clat percentiles (msec): 00:23:20.416 | 1.00th=[ 95], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279], 00:23:20.416 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288], 00:23:20.416 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 317], 00:23:20.416 | 99.00th=[ 430], 99.50th=[ 481], 99.90th=[ 518], 99.95th=[ 518], 00:23:20.416 | 99.99th=[ 518] 00:23:20.416 bw ( KiB/s): min=12800, max=14848, per=3.33%, avg=14283.25, stdev=493.90, samples=20 00:23:20.416 iops : min= 50, max= 58, avg=55.70, stdev= 1.89, samples=20 00:23:20.416 lat (msec) : 20=0.17%, 50=0.35%, 100=0.52%, 250=1.39%, 500=97.39% 00:23:20.416 lat (msec) : 750=0.17% 00:23:20.416 cpu : usr=0.21%, sys=0.19%, ctx=579, majf=0, minf=1 00:23:20.416 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0% 00:23:20.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.416 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.416 issued rwts: total=0,574,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:23:20.416 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.416 job14: (groupid=0, jobs=1): err= 0: pid=82496: Mon Jul 22 17:27:37 2024 00:23:20.416 write: IOPS=56, BW=14.1MiB/s (14.8MB/s)(145MiB/10263msec); 0 zone resets 00:23:20.416 slat (usec): min=18, max=1369, avg=62.38, stdev=56.34 00:23:20.416 clat (usec): min=879, max=520602, avg=282544.60, stdev=46998.95 00:23:20.416 lat (msec): min=2, max=520, avg=282.61, stdev=46.99 00:23:20.416 clat percentiles (msec): 00:23:20.416 | 1.00th=[ 12], 5.00th=[ 271], 10.00th=[ 275], 20.00th=[ 279], 00:23:20.416 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288], 00:23:20.416 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 317], 00:23:20.416 | 99.00th=[ 435], 99.50th=[ 485], 99.90th=[ 523], 99.95th=[ 523], 00:23:20.416 | 99.99th=[ 523] 00:23:20.416 bw ( KiB/s): min=12288, max=17920, per=3.37%, avg=14462.55, stdev=1008.94, samples=20 00:23:20.416 iops : min= 48, max= 70, avg=56.45, stdev= 3.95, samples=20 00:23:20.416 lat (usec) : 1000=0.17% 00:23:20.416 lat (msec) : 4=0.17%, 10=0.34%, 20=0.52%, 50=0.34%, 100=0.52% 00:23:20.416 lat (msec) : 250=1.55%, 500=96.03%, 750=0.34% 00:23:20.416 cpu : usr=0.21%, sys=0.26%, ctx=585, majf=0, minf=1 00:23:20.416 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0% 00:23:20.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.416 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.416 issued rwts: total=0,580,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.416 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.416 job15: (groupid=0, jobs=1): err= 0: pid=82549: Mon Jul 22 17:27:37 2024 00:23:20.416 write: IOPS=55, BW=14.0MiB/s (14.6MB/s)(143MiB/10257msec); 0 zone resets 00:23:20.416 slat (usec): min=15, max=293, avg=52.72, stdev=27.79 00:23:20.416 clat (msec): min=28, max=507, avg=285.91, stdev=33.98 00:23:20.416 lat (msec): min=28, 
max=507, avg=285.96, stdev=33.98 00:23:20.416 clat percentiles (msec): 00:23:20.416 | 1.00th=[ 116], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279], 00:23:20.416 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288], 00:23:20.416 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 317], 00:23:20.416 | 99.00th=[ 422], 99.50th=[ 472], 99.90th=[ 510], 99.95th=[ 510], 00:23:20.416 | 99.99th=[ 510] 00:23:20.416 bw ( KiB/s): min=12800, max=14848, per=3.33%, avg=14260.55, stdev=504.71, samples=20 00:23:20.416 iops : min= 50, max= 58, avg=55.70, stdev= 1.98, samples=20 00:23:20.416 lat (msec) : 50=0.35%, 100=0.52%, 250=1.40%, 500=97.56%, 750=0.17% 00:23:20.416 cpu : usr=0.13%, sys=0.22%, ctx=601, majf=0, minf=1 00:23:20.416 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0% 00:23:20.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.416 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.416 issued rwts: total=0,573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.416 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.416 job16: (groupid=0, jobs=1): err= 0: pid=82550: Mon Jul 22 17:27:37 2024 00:23:20.416 write: IOPS=55, BW=14.0MiB/s (14.6MB/s)(143MiB/10244msec); 0 zone resets 00:23:20.416 slat (usec): min=22, max=281, avg=61.28, stdev=33.21 00:23:20.416 clat (msec): min=32, max=508, avg=286.02, stdev=33.62 00:23:20.416 lat (msec): min=32, max=508, avg=286.08, stdev=33.62 00:23:20.416 clat percentiles (msec): 00:23:20.416 | 1.00th=[ 120], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279], 00:23:20.416 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288], 00:23:20.416 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 317], 00:23:20.416 | 99.00th=[ 422], 99.50th=[ 472], 99.90th=[ 510], 99.95th=[ 510], 00:23:20.416 | 99.99th=[ 510] 00:23:20.416 bw ( KiB/s): min=12800, max=14848, per=3.32%, avg=14256.30, stdev=505.41, samples=20 00:23:20.416 iops 
: min= 50, max= 58, avg=55.60, stdev= 1.98, samples=20 00:23:20.416 lat (msec) : 50=0.35%, 100=0.35%, 250=1.57%, 500=97.55%, 750=0.17% 00:23:20.416 cpu : usr=0.12%, sys=0.24%, ctx=629, majf=0, minf=1 00:23:20.416 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0% 00:23:20.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.416 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.416 issued rwts: total=0,572,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.416 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.416 job17: (groupid=0, jobs=1): err= 0: pid=82551: Mon Jul 22 17:27:37 2024 00:23:20.416 write: IOPS=56, BW=14.0MiB/s (14.7MB/s)(144MiB/10260msec); 0 zone resets 00:23:20.416 slat (usec): min=26, max=410, avg=61.34, stdev=20.88 00:23:20.416 clat (msec): min=6, max=518, avg=284.96, stdev=39.20 00:23:20.416 lat (msec): min=6, max=518, avg=285.02, stdev=39.21 00:23:20.416 clat percentiles (msec): 00:23:20.416 | 1.00th=[ 73], 5.00th=[ 271], 10.00th=[ 275], 20.00th=[ 279], 00:23:20.416 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288], 00:23:20.416 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 300], 95.00th=[ 317], 00:23:20.416 | 99.00th=[ 430], 99.50th=[ 485], 99.90th=[ 518], 99.95th=[ 518], 00:23:20.416 | 99.99th=[ 518] 00:23:20.416 bw ( KiB/s): min=12800, max=15360, per=3.34%, avg=14333.10, stdev=575.51, samples=20 00:23:20.416 iops : min= 50, max= 60, avg=55.90, stdev= 2.27, samples=20 00:23:20.416 lat (msec) : 10=0.17%, 20=0.17%, 50=0.35%, 100=0.52%, 250=1.57% 00:23:20.416 lat (msec) : 500=96.87%, 750=0.35% 00:23:20.416 cpu : usr=0.24%, sys=0.19%, ctx=581, majf=0, minf=1 00:23:20.416 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0% 00:23:20.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.416 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.416 issued rwts: 
total=0,575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.416 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.416 job18: (groupid=0, jobs=1): err= 0: pid=82552: Mon Jul 22 17:27:37 2024 00:23:20.417 write: IOPS=55, BW=14.0MiB/s (14.7MB/s)(143MiB/10250msec); 0 zone resets 00:23:20.417 slat (usec): min=17, max=134, avg=58.07, stdev=16.82 00:23:20.417 clat (msec): min=31, max=498, avg=285.72, stdev=32.93 00:23:20.417 lat (msec): min=31, max=498, avg=285.78, stdev=32.93 00:23:20.417 clat percentiles (msec): 00:23:20.417 | 1.00th=[ 118], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279], 00:23:20.417 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288], 00:23:20.417 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 317], 00:23:20.417 | 99.00th=[ 409], 99.50th=[ 464], 99.90th=[ 498], 99.95th=[ 498], 00:23:20.417 | 99.99th=[ 498] 00:23:20.417 bw ( KiB/s): min=12800, max=14848, per=3.32%, avg=14256.30, stdev=532.01, samples=20 00:23:20.417 iops : min= 50, max= 58, avg=55.60, stdev= 2.09, samples=20 00:23:20.417 lat (msec) : 50=0.35%, 100=0.35%, 250=1.57%, 500=97.73% 00:23:20.417 cpu : usr=0.20%, sys=0.19%, ctx=581, majf=0, minf=1 00:23:20.417 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0% 00:23:20.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.417 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.417 issued rwts: total=0,573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.417 job19: (groupid=0, jobs=1): err= 0: pid=82553: Mon Jul 22 17:27:37 2024 00:23:20.417 write: IOPS=55, BW=14.0MiB/s (14.6MB/s)(143MiB/10256msec); 0 zone resets 00:23:20.417 slat (usec): min=14, max=112, avg=54.57, stdev=14.98 00:23:20.417 clat (msec): min=20, max=514, avg=285.87, stdev=35.23 00:23:20.417 lat (msec): min=20, max=514, avg=285.92, stdev=35.24 00:23:20.417 clat percentiles (msec): 00:23:20.417 | 
1.00th=[ 109], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279], 00:23:20.417 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288], 00:23:20.417 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 317], 00:23:20.417 | 99.00th=[ 426], 99.50th=[ 481], 99.90th=[ 514], 99.95th=[ 514], 00:23:20.417 | 99.99th=[ 514] 00:23:20.417 bw ( KiB/s): min=12288, max=15329, per=3.33%, avg=14281.80, stdev=616.42, samples=20 00:23:20.417 iops : min= 48, max= 59, avg=55.70, stdev= 2.34, samples=20 00:23:20.417 lat (msec) : 50=0.35%, 100=0.52%, 250=1.40%, 500=97.56%, 750=0.17% 00:23:20.417 cpu : usr=0.13%, sys=0.30%, ctx=573, majf=0, minf=1 00:23:20.417 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0% 00:23:20.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.417 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.417 issued rwts: total=0,573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.417 job20: (groupid=0, jobs=1): err= 0: pid=82554: Mon Jul 22 17:27:37 2024 00:23:20.417 write: IOPS=55, BW=14.0MiB/s (14.6MB/s)(143MiB/10263msec); 0 zone resets 00:23:20.417 slat (usec): min=25, max=6882, avg=64.31, stdev=286.17 00:23:20.417 clat (msec): min=16, max=511, avg=285.88, stdev=34.78 00:23:20.417 lat (msec): min=23, max=511, avg=285.95, stdev=34.70 00:23:20.417 clat percentiles (msec): 00:23:20.417 | 1.00th=[ 111], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279], 00:23:20.417 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288], 00:23:20.417 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 300], 95.00th=[ 317], 00:23:20.417 | 99.00th=[ 426], 99.50th=[ 477], 99.90th=[ 510], 99.95th=[ 510], 00:23:20.417 | 99.99th=[ 510] 00:23:20.417 bw ( KiB/s): min=12800, max=14848, per=3.32%, avg=14256.10, stdev=527.74, samples=20 00:23:20.417 iops : min= 50, max= 58, avg=55.55, stdev= 1.99, samples=20 00:23:20.417 lat (msec) : 
20=0.17%, 50=0.17%, 100=0.52%, 250=1.40%, 500=97.56% 00:23:20.417 lat (msec) : 750=0.17% 00:23:20.417 cpu : usr=0.15%, sys=0.23%, ctx=588, majf=0, minf=1 00:23:20.417 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0% 00:23:20.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.417 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.417 issued rwts: total=0,573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.417 job21: (groupid=0, jobs=1): err= 0: pid=82555: Mon Jul 22 17:27:37 2024 00:23:20.417 write: IOPS=55, BW=14.0MiB/s (14.6MB/s)(143MiB/10259msec); 0 zone resets 00:23:20.417 slat (usec): min=29, max=2314, avg=65.78, stdev=95.33 00:23:20.417 clat (msec): min=26, max=509, avg=285.88, stdev=34.32 00:23:20.417 lat (msec): min=28, max=509, avg=285.94, stdev=34.29 00:23:20.417 clat percentiles (msec): 00:23:20.417 | 1.00th=[ 113], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279], 00:23:20.417 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288], 00:23:20.417 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 300], 95.00th=[ 317], 00:23:20.417 | 99.00th=[ 422], 99.50th=[ 477], 99.90th=[ 510], 99.95th=[ 510], 00:23:20.417 | 99.99th=[ 510] 00:23:20.417 bw ( KiB/s): min=12800, max=14848, per=3.32%, avg=14257.55, stdev=501.11, samples=20 00:23:20.417 iops : min= 50, max= 58, avg=55.60, stdev= 1.88, samples=20 00:23:20.417 lat (msec) : 50=0.35%, 100=0.52%, 250=1.40%, 500=97.56%, 750=0.17% 00:23:20.417 cpu : usr=0.19%, sys=0.23%, ctx=577, majf=0, minf=1 00:23:20.417 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0% 00:23:20.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.417 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.417 issued rwts: total=0,573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.417 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:23:20.417 job22: (groupid=0, jobs=1): err= 0: pid=82556: Mon Jul 22 17:27:37 2024 00:23:20.417 write: IOPS=56, BW=14.0MiB/s (14.7MB/s)(144MiB/10264msec); 0 zone resets 00:23:20.417 slat (usec): min=23, max=312, avg=55.50, stdev=27.12 00:23:20.417 clat (msec): min=6, max=517, avg=285.09, stdev=38.66 00:23:20.417 lat (msec): min=6, max=517, avg=285.14, stdev=38.67 00:23:20.417 clat percentiles (msec): 00:23:20.417 | 1.00th=[ 79], 5.00th=[ 271], 10.00th=[ 275], 20.00th=[ 279], 00:23:20.417 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288], 00:23:20.417 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 300], 95.00th=[ 317], 00:23:20.417 | 99.00th=[ 430], 99.50th=[ 485], 99.90th=[ 518], 99.95th=[ 518], 00:23:20.417 | 99.99th=[ 518] 00:23:20.417 bw ( KiB/s): min=12288, max=14848, per=3.34%, avg=14333.05, stdev=597.65, samples=20 00:23:20.417 iops : min= 48, max= 58, avg=55.90, stdev= 2.31, samples=20 00:23:20.417 lat (msec) : 10=0.17%, 20=0.17%, 50=0.35%, 100=0.52%, 250=1.39% 00:23:20.417 lat (msec) : 500=97.22%, 750=0.17% 00:23:20.417 cpu : usr=0.18%, sys=0.19%, ctx=619, majf=0, minf=1 00:23:20.417 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0% 00:23:20.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.417 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.417 issued rwts: total=0,575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.417 job23: (groupid=0, jobs=1): err= 0: pid=82557: Mon Jul 22 17:27:37 2024 00:23:20.417 write: IOPS=55, BW=14.0MiB/s (14.6MB/s)(143MiB/10242msec); 0 zone resets 00:23:20.417 slat (usec): min=13, max=243, avg=49.98, stdev=19.87 00:23:20.417 clat (msec): min=32, max=505, avg=285.99, stdev=33.43 00:23:20.417 lat (msec): min=32, max=506, avg=286.04, stdev=33.43 00:23:20.417 clat percentiles (msec): 00:23:20.417 | 1.00th=[ 121], 5.00th=[ 271], 
10.00th=[ 279], 20.00th=[ 279], 00:23:20.417 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288], 00:23:20.417 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 317], 00:23:20.417 | 99.00th=[ 418], 99.50th=[ 472], 99.90th=[ 506], 99.95th=[ 506], 00:23:20.417 | 99.99th=[ 506] 00:23:20.417 bw ( KiB/s): min=12800, max=14877, per=3.32%, avg=14257.80, stdev=508.83, samples=20 00:23:20.417 iops : min= 50, max= 58, avg=55.60, stdev= 2.04, samples=20 00:23:20.417 lat (msec) : 50=0.35%, 100=0.35%, 250=1.57%, 500=97.55%, 750=0.17% 00:23:20.417 cpu : usr=0.17%, sys=0.19%, ctx=586, majf=0, minf=1 00:23:20.417 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0% 00:23:20.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.417 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.417 issued rwts: total=0,572,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.417 job24: (groupid=0, jobs=1): err= 0: pid=82558: Mon Jul 22 17:27:37 2024 00:23:20.417 write: IOPS=55, BW=14.0MiB/s (14.6MB/s)(143MiB/10258msec); 0 zone resets 00:23:20.417 slat (usec): min=26, max=258, avg=51.91, stdev=22.74 00:23:20.417 clat (msec): min=27, max=508, avg=285.90, stdev=34.04 00:23:20.417 lat (msec): min=27, max=508, avg=285.95, stdev=34.04 00:23:20.417 clat percentiles (msec): 00:23:20.417 | 1.00th=[ 115], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279], 00:23:20.417 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288], 00:23:20.417 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 317], 00:23:20.417 | 99.00th=[ 422], 99.50th=[ 472], 99.90th=[ 510], 99.95th=[ 510], 00:23:20.417 | 99.99th=[ 510] 00:23:20.417 bw ( KiB/s): min=12800, max=14848, per=3.32%, avg=14259.20, stdev=505.90, samples=20 00:23:20.417 iops : min= 50, max= 58, avg=55.70, stdev= 1.98, samples=20 00:23:20.417 lat (msec) : 50=0.35%, 100=0.52%, 250=1.40%, 
500=97.56%, 750=0.17% 00:23:20.417 cpu : usr=0.17%, sys=0.18%, ctx=597, majf=0, minf=1 00:23:20.417 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0% 00:23:20.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.417 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.417 issued rwts: total=0,573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.417 job25: (groupid=0, jobs=1): err= 0: pid=82559: Mon Jul 22 17:27:37 2024 00:23:20.417 write: IOPS=55, BW=14.0MiB/s (14.6MB/s)(143MiB/10245msec); 0 zone resets 00:23:20.417 slat (usec): min=18, max=265, avg=52.08, stdev=24.13 00:23:20.417 clat (msec): min=33, max=507, avg=286.08, stdev=33.44 00:23:20.417 lat (msec): min=33, max=507, avg=286.13, stdev=33.44 00:23:20.417 clat percentiles (msec): 00:23:20.418 | 1.00th=[ 122], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279], 00:23:20.418 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288], 00:23:20.418 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 317], 00:23:20.418 | 99.00th=[ 422], 99.50th=[ 472], 99.90th=[ 510], 99.95th=[ 510], 00:23:20.418 | 99.99th=[ 510] 00:23:20.418 bw ( KiB/s): min=12800, max=15360, per=3.32%, avg=14256.30, stdev=505.41, samples=20 00:23:20.418 iops : min= 50, max= 60, avg=55.60, stdev= 1.98, samples=20 00:23:20.418 lat (msec) : 50=0.17%, 100=0.52%, 250=1.57%, 500=97.55%, 750=0.17% 00:23:20.418 cpu : usr=0.16%, sys=0.20%, ctx=588, majf=0, minf=1 00:23:20.418 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0% 00:23:20.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.418 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.418 issued rwts: total=0,572,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.418 job26: (groupid=0, jobs=1): 
err= 0: pid=82560: Mon Jul 22 17:27:37 2024 00:23:20.418 write: IOPS=55, BW=14.0MiB/s (14.6MB/s)(143MiB/10244msec); 0 zone resets 00:23:20.418 slat (usec): min=14, max=177, avg=53.98, stdev=14.67 00:23:20.418 clat (msec): min=33, max=506, avg=286.04, stdev=33.33 00:23:20.418 lat (msec): min=33, max=506, avg=286.09, stdev=33.33 00:23:20.418 clat percentiles (msec): 00:23:20.418 | 1.00th=[ 122], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279], 00:23:20.418 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288], 00:23:20.418 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 317], 00:23:20.418 | 99.00th=[ 422], 99.50th=[ 472], 99.90th=[ 506], 99.95th=[ 506], 00:23:20.418 | 99.99th=[ 506] 00:23:20.418 bw ( KiB/s): min=12800, max=15360, per=3.32%, avg=14257.80, stdev=584.49, samples=20 00:23:20.418 iops : min= 50, max= 60, avg=55.60, stdev= 2.37, samples=20 00:23:20.418 lat (msec) : 50=0.17%, 100=0.52%, 250=1.57%, 500=97.55%, 750=0.17% 00:23:20.418 cpu : usr=0.15%, sys=0.28%, ctx=573, majf=0, minf=1 00:23:20.418 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0% 00:23:20.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.418 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.418 issued rwts: total=0,572,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.418 job27: (groupid=0, jobs=1): err= 0: pid=82561: Mon Jul 22 17:27:37 2024 00:23:20.418 write: IOPS=55, BW=14.0MiB/s (14.7MB/s)(144MiB/10263msec); 0 zone resets 00:23:20.418 slat (usec): min=26, max=740, avg=51.44, stdev=38.54 00:23:20.418 clat (msec): min=15, max=512, avg=285.54, stdev=36.17 00:23:20.418 lat (msec): min=16, max=512, avg=285.59, stdev=36.16 00:23:20.418 clat percentiles (msec): 00:23:20.418 | 1.00th=[ 99], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279], 00:23:20.418 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288], 
00:23:20.418 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 300], 95.00th=[ 317], 00:23:20.418 | 99.00th=[ 426], 99.50th=[ 477], 99.90th=[ 514], 99.95th=[ 514], 00:23:20.418 | 99.99th=[ 514] 00:23:20.418 bw ( KiB/s): min=12800, max=14848, per=3.33%, avg=14280.35, stdev=491.93, samples=20 00:23:20.418 iops : min= 50, max= 58, avg=55.65, stdev= 1.84, samples=20 00:23:20.418 lat (msec) : 20=0.17%, 50=0.35%, 100=0.52%, 250=1.39%, 500=97.39% 00:23:20.418 lat (msec) : 750=0.17% 00:23:20.418 cpu : usr=0.17%, sys=0.19%, ctx=592, majf=0, minf=1 00:23:20.418 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0% 00:23:20.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.418 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.418 issued rwts: total=0,574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.418 job28: (groupid=0, jobs=1): err= 0: pid=82562: Mon Jul 22 17:27:37 2024 00:23:20.418 write: IOPS=55, BW=14.0MiB/s (14.6MB/s)(143MiB/10258msec); 0 zone resets 00:23:20.418 slat (usec): min=24, max=1146, avg=49.20, stdev=49.25 00:23:20.418 clat (msec): min=27, max=507, avg=285.90, stdev=34.04 00:23:20.418 lat (msec): min=28, max=507, avg=285.95, stdev=34.03 00:23:20.418 clat percentiles (msec): 00:23:20.418 | 1.00th=[ 115], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279], 00:23:20.418 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288], 00:23:20.418 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 317], 00:23:20.418 | 99.00th=[ 422], 99.50th=[ 472], 99.90th=[ 510], 99.95th=[ 510], 00:23:20.418 | 99.99th=[ 510] 00:23:20.418 bw ( KiB/s): min=12800, max=14848, per=3.32%, avg=14259.20, stdev=505.90, samples=20 00:23:20.418 iops : min= 50, max= 58, avg=55.70, stdev= 1.98, samples=20 00:23:20.418 lat (msec) : 50=0.35%, 100=0.52%, 250=1.40%, 500=97.56%, 750=0.17% 00:23:20.418 cpu : usr=0.13%, sys=0.19%, ctx=589, 
majf=0, minf=1 00:23:20.418 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0% 00:23:20.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.418 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.418 issued rwts: total=0,573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.418 job29: (groupid=0, jobs=1): err= 0: pid=82563: Mon Jul 22 17:27:37 2024 00:23:20.418 write: IOPS=55, BW=14.0MiB/s (14.6MB/s)(143MiB/10259msec); 0 zone resets 00:23:20.418 slat (usec): min=28, max=2456, avg=59.79, stdev=101.24 00:23:20.418 clat (msec): min=26, max=509, avg=285.89, stdev=34.31 00:23:20.418 lat (msec): min=28, max=509, avg=285.95, stdev=34.28 00:23:20.418 clat percentiles (msec): 00:23:20.418 | 1.00th=[ 114], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 279], 00:23:20.418 | 30.00th=[ 284], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288], 00:23:20.418 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 300], 95.00th=[ 317], 00:23:20.418 | 99.00th=[ 422], 99.50th=[ 477], 99.90th=[ 510], 99.95th=[ 510], 00:23:20.418 | 99.99th=[ 510] 00:23:20.418 bw ( KiB/s): min=12800, max=14848, per=3.32%, avg=14257.55, stdev=501.11, samples=20 00:23:20.418 iops : min= 50, max= 58, avg=55.60, stdev= 1.88, samples=20 00:23:20.418 lat (msec) : 50=0.35%, 100=0.52%, 250=1.40%, 500=97.56%, 750=0.17% 00:23:20.418 cpu : usr=0.16%, sys=0.27%, ctx=574, majf=0, minf=1 00:23:20.418 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=97.4%, 32=0.0%, >=64=0.0% 00:23:20.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.418 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.418 issued rwts: total=0,573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.418 00:23:20.418 Run status group 0 (all jobs): 00:23:20.418 WRITE: bw=419MiB/s (439MB/s), 
14.0MiB/s-14.1MiB/s (14.6MB/s-14.8MB/s), io=4301MiB (4510MB), run=10241-10270msec 00:23:20.418 00:23:20.418 Disk stats (read/write): 00:23:20.418 sda: ios=48/563, merge=0/0, ticks=118/159496, in_queue=159614, util=95.14% 00:23:20.418 sdb: ios=48/563, merge=0/0, ticks=158/159485, in_queue=159643, util=95.42% 00:23:20.418 sdc: ios=48/567, merge=0/0, ticks=156/159955, in_queue=160111, util=95.83% 00:23:20.418 sdd: ios=48/563, merge=0/0, ticks=162/159465, in_queue=159627, util=95.64% 00:23:20.418 sde: ios=48/564, merge=0/0, ticks=139/159603, in_queue=159743, util=95.90% 00:23:20.418 sdf: ios=48/567, merge=0/0, ticks=159/159930, in_queue=160089, util=96.06% 00:23:20.418 sdg: ios=48/563, merge=0/0, ticks=155/159498, in_queue=159654, util=96.00% 00:23:20.418 sdh: ios=37/563, merge=0/0, ticks=133/159486, in_queue=159618, util=96.31% 00:23:20.418 sdi: ios=32/570, merge=0/0, ticks=142/160023, in_queue=160165, util=96.60% 00:23:20.418 sdj: ios=20/564, merge=0/0, ticks=102/159638, in_queue=159741, util=96.27% 00:23:20.418 sdk: ios=21/566, merge=0/0, ticks=95/159940, in_queue=160035, util=96.53% 00:23:20.418 sdl: ios=19/563, merge=0/0, ticks=101/159496, in_queue=159598, util=96.45% 00:23:20.418 sdm: ios=0/563, merge=0/0, ticks=0/159449, in_queue=159449, util=96.37% 00:23:20.418 sdn: ios=0/566, merge=0/0, ticks=0/159947, in_queue=159946, util=96.80% 00:23:20.418 sdo: ios=0/572, merge=0/0, ticks=0/159858, in_queue=159857, util=96.96% 00:23:20.418 sdp: ios=0/564, merge=0/0, ticks=0/159646, in_queue=159646, util=97.21% 00:23:20.418 sdq: ios=0/563, merge=0/0, ticks=0/159467, in_queue=159467, util=97.16% 00:23:20.418 sdr: ios=0/567, merge=0/0, ticks=0/159870, in_queue=159871, util=97.69% 00:23:20.418 sds: ios=0/563, merge=0/0, ticks=0/159475, in_queue=159475, util=97.54% 00:23:20.418 sdt: ios=0/564, merge=0/0, ticks=0/159615, in_queue=159615, util=97.83% 00:23:20.418 sdu: ios=0/564, merge=0/0, ticks=0/159634, in_queue=159633, util=97.91% 00:23:20.418 sdv: ios=0/564, merge=0/0, 
ticks=0/159694, in_queue=159694, util=98.06% 00:23:20.418 sdw: ios=0/567, merge=0/0, ticks=0/159930, in_queue=159929, util=98.23% 00:23:20.418 sdx: ios=0/563, merge=0/0, ticks=0/159473, in_queue=159472, util=98.08% 00:23:20.418 sdy: ios=0/564, merge=0/0, ticks=0/159691, in_queue=159690, util=98.20% 00:23:20.418 sdz: ios=0/563, merge=0/0, ticks=0/159481, in_queue=159482, util=98.24% 00:23:20.418 sdaa: ios=0/562, merge=0/0, ticks=0/159254, in_queue=159255, util=98.30% 00:23:20.418 sdab: ios=0/565, merge=0/0, ticks=0/159686, in_queue=159685, util=98.59% 00:23:20.418 sdac: ios=0/564, merge=0/0, ticks=0/159661, in_queue=159661, util=98.64% 00:23:20.418 sdad: ios=0/564, merge=0/0, ticks=0/159696, in_queue=159696, util=98.92% 00:23:20.418 [2024-07-22 17:27:37.872039] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:20.418 [2024-07-22 17:27:37.874810] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:20.418 [2024-07-22 17:27:37.877589] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:20.418 [2024-07-22 17:27:37.880210] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:20.418 [2024-07-22 17:27:37.883009] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:20.418 17:27:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@79 -- # sync 00:23:20.418 [2024-07-22 17:27:37.886872] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:20.418 [2024-07-22 17:27:37.890486] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:20.418 [2024-07-22 17:27:37.894091] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:20.418 [2024-07-22 17:27:37.897416] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:20.418 17:27:37 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:23:20.418 [2024-07-22 17:27:37.900294] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:20.418 17:27:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@83 -- # rm -f 00:23:20.418 [2024-07-22 17:27:37.903652] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:20.419 17:27:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@84 -- # iscsicleanup 00:23:20.419 Cleaning up iSCSI connection 00:23:20.419 17:27:37 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:23:20.419 17:27:37 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:23:20.419 [2024-07-22 17:27:37.908456] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:20.419 Logging out of session [sid: 41, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 42, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 43, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 44, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 45, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 46, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 47, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 48, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 49, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 50, target: iqn.2016-06.io.spdk:Target10, 
portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 51, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 52, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 53, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 54, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 55, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 56, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 57, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 58, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 59, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 60, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 61, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 62, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 63, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 64, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 65, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 66, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 67, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 68, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 69, 
target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] 00:23:20.419 Logging out of session [sid: 70, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] 00:23:20.419 Logout of [sid: 41, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 42, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 43, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 44, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 45, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 46, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 47, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 48, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 49, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 50, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 51, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 52, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 53, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 54, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 55, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 56, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 57, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] successful. 
00:23:20.419 Logout of [sid: 58, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 59, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 60, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 61, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 62, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 63, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 64, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 65, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 66, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 67, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 68, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 69, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] successful. 00:23:20.419 Logout of [sid: 70, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] successful. 
00:23:20.419 17:27:38 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:23:20.420 17:27:38 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@983 -- # rm -rf 00:23:20.420 17:27:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@85 -- # remove_backends 00:23:20.420 INFO: Removing lvol bdevs 00:23:20.420 17:27:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@22 -- # echo 'INFO: Removing lvol bdevs' 00:23:20.420 17:27:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # seq 1 30 00:23:20.420 17:27:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:20.420 17:27:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_1 00:23:20.420 17:27:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_1 00:23:20.420 [2024-07-22 17:27:39.093680] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (395eee02-8ec1-4d43-9a2d-1a999857a7af) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:20.420 INFO: lvol bdev lvs0/lbd_1 removed 00:23:20.420 17:27:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_1 removed' 00:23:20.420 17:27:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:20.420 17:27:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_2 00:23:20.420 17:27:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_2 00:23:20.420 [2024-07-22 17:27:39.325795] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (8f986147-15f5-4611-952b-27f49d91e49b) 
received event(SPDK_BDEV_EVENT_REMOVE) 00:23:20.420 INFO: lvol bdev lvs0/lbd_2 removed 00:23:20.420 17:27:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_2 removed' 00:23:20.420 17:27:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:20.420 17:27:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_3 00:23:20.420 17:27:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_3 00:23:20.677 [2024-07-22 17:27:39.561964] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (81c79a2c-8f30-4109-a2ef-4aaf132d3882) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:20.677 INFO: lvol bdev lvs0/lbd_3 removed 00:23:20.677 17:27:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_3 removed' 00:23:20.677 17:27:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:20.677 17:27:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_4 00:23:20.677 17:27:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_4 00:23:20.935 [2024-07-22 17:27:39.802185] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (fccff222-7c45-44f0-b0cd-3e0c91f9f42e) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:20.935 INFO: lvol bdev lvs0/lbd_4 removed 00:23:20.935 17:27:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_4 removed' 00:23:20.935 17:27:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:20.935 
17:27:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_5 00:23:20.935 17:27:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_5 00:23:21.195 [2024-07-22 17:27:40.034294] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (98eaf437-78fe-4ba9-9195-a69747af6e20) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:21.195 INFO: lvol bdev lvs0/lbd_5 removed 00:23:21.195 17:27:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_5 removed' 00:23:21.195 17:27:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:21.195 17:27:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_6 00:23:21.195 17:27:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_6 00:23:21.457 [2024-07-22 17:27:40.270529] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (3b7f4ebc-52b0-4e8a-9d03-da6cd16001a6) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:21.457 INFO: lvol bdev lvs0/lbd_6 removed 00:23:21.457 17:27:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_6 removed' 00:23:21.457 17:27:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:21.457 17:27:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_7 00:23:21.457 17:27:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_7 00:23:21.715 [2024-07-22 17:27:40.506611] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name 
(28d62a9f-fcb4-4ed1-8711-a540c850cd9d) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:21.715 INFO: lvol bdev lvs0/lbd_7 removed 00:23:21.716 17:27:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_7 removed' 00:23:21.716 17:27:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:21.716 17:27:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_8 00:23:21.716 17:27:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_8 00:23:21.974 [2024-07-22 17:27:40.750854] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (4a5e5873-37b5-4159-ba74-fefd8e520794) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:21.974 INFO: lvol bdev lvs0/lbd_8 removed 00:23:21.974 17:27:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_8 removed' 00:23:21.974 17:27:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:21.974 17:27:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_9 00:23:21.974 17:27:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_9 00:23:22.232 [2024-07-22 17:27:40.983082] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (0e116c10-be5e-4473-8b30-e895d74a9afc) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:22.232 INFO: lvol bdev lvs0/lbd_9 removed 00:23:22.232 17:27:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_9 removed' 00:23:22.232 17:27:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 
$CONNECTION_NUMBER) 00:23:22.232 17:27:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_10 00:23:22.232 17:27:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_10 00:23:22.490 [2024-07-22 17:27:41.223170] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (b5da14bf-227b-4810-9637-2236f0ca508b) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:22.490 INFO: lvol bdev lvs0/lbd_10 removed 00:23:22.490 17:27:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_10 removed' 00:23:22.490 17:27:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:22.490 17:27:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_11 00:23:22.490 17:27:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_11 00:23:22.748 [2024-07-22 17:27:41.467405] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (20f8252d-f273-4502-8e59-de2f3524bc5a) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:22.748 INFO: lvol bdev lvs0/lbd_11 removed 00:23:22.748 17:27:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_11 removed' 00:23:22.748 17:27:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:22.748 17:27:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_12 00:23:22.748 17:27:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_12 00:23:23.006 [2024-07-22 17:27:41.703513] lun.c: 398:bdev_event_cb: 
*NOTICE*: bdev name (ef7b70b7-be3a-4ea0-aecc-55cf4a3d0e03) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:23.006 INFO: lvol bdev lvs0/lbd_12 removed 00:23:23.006 17:27:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_12 removed' 00:23:23.006 17:27:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:23.006 17:27:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_13 00:23:23.006 17:27:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_13 00:23:23.006 [2024-07-22 17:27:41.935698] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (e21c58b3-8bf5-4b44-b00e-91fca26ad680) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:23.006 INFO: lvol bdev lvs0/lbd_13 removed 00:23:23.006 17:27:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_13 removed' 00:23:23.006 17:27:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:23.006 17:27:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_14 00:23:23.006 17:27:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_14 00:23:23.264 [2024-07-22 17:27:42.171774] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (d6846494-5d41-4a8f-a69c-0fcba421e070) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:23.264 INFO: lvol bdev lvs0/lbd_14 removed 00:23:23.264 17:27:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_14 removed' 00:23:23.264 17:27:42 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:23.264 17:27:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_15 00:23:23.264 17:27:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_15 00:23:23.523 [2024-07-22 17:27:42.403850] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (55b94659-258a-4531-8c58-29e4ac5b1bcb) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:23.523 INFO: lvol bdev lvs0/lbd_15 removed 00:23:23.523 17:27:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_15 removed' 00:23:23.523 17:27:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:23.523 17:27:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_16 00:23:23.523 17:27:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_16 00:23:23.781 [2024-07-22 17:27:42.640012] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (b9baf384-900f-47e7-b117-41a9d9972109) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:23.781 INFO: lvol bdev lvs0/lbd_16 removed 00:23:23.781 17:27:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_16 removed' 00:23:23.781 17:27:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:23.781 17:27:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_17 00:23:23.781 17:27:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_17 00:23:24.040 
[2024-07-22 17:27:42.872276] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (b7a5d8e8-834e-4896-8ee9-de1183921dae) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:24.040 INFO: lvol bdev lvs0/lbd_17 removed 00:23:24.040 17:27:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_17 removed' 00:23:24.040 17:27:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:24.040 17:27:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_18 00:23:24.040 17:27:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_18 00:23:24.298 [2024-07-22 17:27:43.108374] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (5fb5d684-e8a6-4e1a-94d7-6ceb11800523) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:24.298 INFO: lvol bdev lvs0/lbd_18 removed 00:23:24.298 17:27:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_18 removed' 00:23:24.298 17:27:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:24.298 17:27:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_19 00:23:24.298 17:27:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_19 00:23:24.558 [2024-07-22 17:27:43.344764] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (4ca78959-94f1-429b-a6a0-a128204bd81c) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:24.558 INFO: lvol bdev lvs0/lbd_19 removed 00:23:24.558 17:27:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_19 removed' 00:23:24.558 17:27:43 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:24.558 17:27:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_20 00:23:24.558 17:27:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_20 00:23:24.816 [2024-07-22 17:27:43.572835] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (25b3e514-d79d-401e-b6cd-8b6059bd655c) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:24.816 INFO: lvol bdev lvs0/lbd_20 removed 00:23:24.816 17:27:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_20 removed' 00:23:24.816 17:27:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:24.816 17:27:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_21 00:23:24.816 17:27:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_21 00:23:25.074 [2024-07-22 17:27:43.809052] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (f267bcfc-0b7f-49e4-afa8-c9591e2c09cd) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:25.074 INFO: lvol bdev lvs0/lbd_21 removed 00:23:25.074 17:27:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_21 removed' 00:23:25.074 17:27:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:25.074 17:27:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_22 00:23:25.074 17:27:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete lvs0/lbd_22 00:23:25.332 [2024-07-22 17:27:44.045168] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (d59d82c6-4749-4bf9-914d-73654ce9654c) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:25.332 INFO: lvol bdev lvs0/lbd_22 removed 00:23:25.332 17:27:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_22 removed' 00:23:25.332 17:27:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:25.332 17:27:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_23 00:23:25.332 17:27:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_23 00:23:25.332 [2024-07-22 17:27:44.273268] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (0cbcb6f0-fd0a-4cae-8615-295560dc6981) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:25.590 INFO: lvol bdev lvs0/lbd_23 removed 00:23:25.590 17:27:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_23 removed' 00:23:25.591 17:27:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:25.591 17:27:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_24 00:23:25.591 17:27:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_24 00:23:25.591 [2024-07-22 17:27:44.513785] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (18d5e6bc-4cba-4012-b01f-3f3d420a7127) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:25.591 INFO: lvol bdev lvs0/lbd_24 removed 00:23:25.591 17:27:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_24 
removed' 00:23:25.591 17:27:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:25.591 17:27:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_25 00:23:25.591 17:27:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_25 00:23:25.848 [2024-07-22 17:27:44.749900] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (24253da9-8ae4-4151-9a6d-c8ee65524fbc) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:25.848 INFO: lvol bdev lvs0/lbd_25 removed 00:23:25.848 17:27:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_25 removed' 00:23:25.848 17:27:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:25.848 17:27:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_26 00:23:25.848 17:27:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_26 00:23:26.106 [2024-07-22 17:27:44.981994] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (b7f6aab8-739a-4f15-ae0d-27e4b2b8a597) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:26.106 INFO: lvol bdev lvs0/lbd_26 removed 00:23:26.106 17:27:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_26 removed' 00:23:26.106 17:27:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:26.106 17:27:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_27 00:23:26.106 17:27:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_27 00:23:26.364 [2024-07-22 17:27:45.214371] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (2e8bab81-2cc5-4c68-9a07-cdcce5f261cd) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:26.364 INFO: lvol bdev lvs0/lbd_27 removed 00:23:26.364 17:27:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_27 removed' 00:23:26.364 17:27:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:26.364 17:27:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_28 00:23:26.364 17:27:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_28 00:23:26.621 [2024-07-22 17:27:45.458432] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (dfc34f5a-c118-4ff7-ac1d-8e4c1e2a42af) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:26.621 INFO: lvol bdev lvs0/lbd_28 removed 00:23:26.621 17:27:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_28 removed' 00:23:26.621 17:27:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:26.621 17:27:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_29 00:23:26.621 17:27:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_29 00:23:26.879 [2024-07-22 17:27:45.706736] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (7b291f12-589d-46b7-89f6-e30ecbe113e7) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:26.879 INFO: lvol bdev lvs0/lbd_29 removed 00:23:26.879 17:27:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- 
# echo -e '\tINFO: lvol bdev lvs0/lbd_29 removed' 00:23:26.879 17:27:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:23:26.879 17:27:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_30 00:23:26.879 17:27:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_30 00:23:27.137 [2024-07-22 17:27:45.954850] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (37875f1f-740b-4007-87bf-73dd59391556) received event(SPDK_BDEV_EVENT_REMOVE) 00:23:27.137 INFO: lvol bdev lvs0/lbd_30 removed 00:23:27.137 17:27:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_30 removed' 00:23:27.137 17:27:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@28 -- # sleep 1 00:23:28.070 INFO: Removing lvol stores 00:23:28.070 17:27:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@30 -- # echo 'INFO: Removing lvol stores' 00:23:28.070 17:27:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:23:28.636 INFO: lvol store lvs0 removed 00:23:28.636 INFO: Removing NVMe 00:23:28.636 17:27:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@32 -- # echo 'INFO: lvol store lvs0 removed' 00:23:28.636 17:27:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@34 -- # echo 'INFO: Removing NVMe' 00:23:28.636 17:27:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:23:28.636 17:27:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@37 -- # return 0 00:23:28.636 17:27:47 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@86 -- # killprocess 80676 00:23:28.636 17:27:47 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 80676 ']' 00:23:28.636 17:27:47 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@952 -- # kill -0 80676 00:23:28.636 17:27:47 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@953 -- # uname 00:23:28.636 17:27:47 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:28.636 17:27:47 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80676 00:23:28.895 killing process with pid 80676 00:23:28.895 17:27:47 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:28.895 17:27:47 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:28.895 17:27:47 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80676' 00:23:28.895 17:27:47 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@967 -- # kill 80676 00:23:28.895 17:27:47 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@972 -- # wait 80676 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@87 -- # iscsitestfini 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:23:31.423 00:23:31.423 real 0m52.392s 00:23:31.423 user 1m5.697s 00:23:31.423 sys 0m12.713s 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:31.423 ************************************ 00:23:31.423 END TEST iscsi_tgt_multiconnection 00:23:31.423 ************************************ 00:23:31.423 
17:27:49 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:23:31.423 17:27:49 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@46 -- # '[' 1 -eq 1 ']' 00:23:31.423 17:27:49 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@47 -- # run_test iscsi_tgt_ext4test /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ext4test/ext4test.sh 00:23:31.423 17:27:49 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:31.423 17:27:49 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:31.423 17:27:49 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:23:31.423 ************************************ 00:23:31.423 START TEST iscsi_tgt_ext4test 00:23:31.423 ************************************ 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ext4test/ext4test.sh 00:23:31.423 * Looking for test storage... 00:23:31.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ext4test 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- 
iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@24 -- # iscsitestinit 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@28 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@29 -- # node_base=iqn.2013-06.com.intel.ch.spdk 00:23:31.423 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@31 -- # timing_enter start_iscsi_tgt 00:23:31.424 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:31.424 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:23:31.424 17:27:49 
iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@34 -- # pid=83133 00:23:31.424 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@35 -- # echo 'Process pid: 83133' 00:23:31.424 Process pid: 83133 00:23:31.424 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@37 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:31.424 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@33 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:23:31.424 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@39 -- # waitforlisten 83133 00:23:31.424 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@829 -- # '[' -z 83133 ']' 00:23:31.424 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.424 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:31.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.424 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.424 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:31.424 17:27:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:23:31.424 [2024-07-22 17:27:50.135048] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:23:31.424 [2024-07-22 17:27:50.135295] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83133 ] 00:23:31.424 [2024-07-22 17:27:50.309857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.682 [2024-07-22 17:27:50.569639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.246 17:27:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:32.246 17:27:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@862 -- # return 0 00:23:32.246 17:27:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 4 -b iqn.2013-06.com.intel.ch.spdk 00:23:32.504 17:27:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:23:33.876 17:27:52 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:23:33.876 17:27:52 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:33.876 17:27:52 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 512 4096 --name Malloc0 00:23:34.809 Malloc0 00:23:34.809 iscsi_tgt is listening. Running tests... 00:23:34.809 17:27:53 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@44 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:23:34.809 17:27:53 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@46 -- # timing_exit start_iscsi_tgt 00:23:34.809 17:27:53 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:34.809 17:27:53 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:23:34.809 17:27:53 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:23:35.067 17:27:53 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:23:35.325 17:27:54 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_create Malloc0 00:23:35.582 true 00:23:35.582 17:27:54 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target0 Target0_alias EE_Malloc0:0 1:2 64 -d 00:23:35.840 17:27:54 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@55 -- # sleep 1 00:23:36.796 17:27:55 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@57 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:23:36.796 10.0.0.1:3260,1 iqn.2013-06.com.intel.ch.spdk:Target0 00:23:36.796 17:27:55 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@58 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:23:36.796 Logging in to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] 00:23:36.796 Login to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] successful. 
00:23:36.796 17:27:55 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@59 -- # waitforiscsidevices 1 00:23:36.796 17:27:55 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@116 -- # local num=1 00:23:36.796 17:27:55 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:23:36.796 17:27:55 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:23:36.796 17:27:55 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:23:36.796 17:27:55 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:23:36.796 [2024-07-22 17:27:55.730770] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:36.796 17:27:55 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # n=1 00:23:36.796 17:27:55 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:23:36.796 17:27:55 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@123 -- # return 0 00:23:36.796 Test error injection 00:23:36.796 17:27:55 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@61 -- # echo 'Test error injection' 00:23:36.796 17:27:55 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_inject_error EE_Malloc0 all failure -n 1000 00:23:37.361 17:27:56 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # iscsiadm -m session -P 3 00:23:37.361 17:27:56 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # grep 'Attached scsi disk' 00:23:37.361 17:27:56 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # awk '{print $4}' 00:23:37.361 17:27:56 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # head -n1 00:23:37.361 17:27:56 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # dev=sda 00:23:37.361 17:27:56 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@65 -- # waitforfile /dev/sda 00:23:37.361 17:27:56 
iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1265 -- # local i=0 00:23:37.361 17:27:56 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda ']' 00:23:37.361 17:27:56 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda ']' 00:23:37.361 17:27:56 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1276 -- # return 0 00:23:37.361 17:27:56 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@66 -- # make_filesystem ext4 /dev/sda 00:23:37.361 17:27:56 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@924 -- # local fstype=ext4 00:23:37.361 17:27:56 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda 00:23:37.361 17:27:56 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@926 -- # local i=0 00:23:37.361 17:27:56 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@927 -- # local force 00:23:37.361 17:27:56 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:23:37.361 17:27:56 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@930 -- # force=-F 00:23:37.361 17:27:56 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:23:37.361 mke2fs 1.46.5 (30-Dec-2021) 00:23:37.619 Discarding device blocks: 0/131072 done 00:23:37.877 Warning: could not erase sector 2: Input/output error 00:23:37.877 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:37.877 Filesystem UUID: 5459fd2f-9c48-40ce-bbb3-fd9752db4b00 00:23:37.877 Superblock backups stored on blocks: 00:23:37.877 32768, 98304 00:23:37.877 00:23:37.877 Allocating group tables: 0/4 done 00:23:37.877 Warning: could not read block 0: Input/output error 00:23:37.877 Warning: could not erase sector 0: Input/output error 00:23:37.877 Writing inode tables: 0/4 done 00:23:38.155 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:23:38.155 17:27:56 
iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 0 -ge 15 ']' 00:23:38.155 17:27:56 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=1 00:23:38.155 17:27:56 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:23:38.155 [2024-07-22 17:27:56.904746] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:39.112 17:27:57 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:23:39.112 mke2fs 1.46.5 (30-Dec-2021) 00:23:39.370 Discarding device blocks: 0/131072 done 00:23:39.370 Warning: could not erase sector 2: Input/output error 00:23:39.370 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:39.370 Filesystem UUID: 0c4eb022-a766-4aab-827c-36e2ef74c7c9 00:23:39.370 Superblock backups stored on blocks: 00:23:39.370 32768, 98304 00:23:39.370 00:23:39.370 Allocating group tables: 0/4 done 00:23:39.628 Warning: could not read block 0: Input/output error 00:23:39.628 Warning: could not erase sector 0: Input/output error 00:23:39.628 Writing inode tables: 0/4 done 00:23:39.628 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:23:39.628 17:27:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 1 -ge 15 ']' 00:23:39.628 17:27:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=2 00:23:39.628 17:27:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:23:39.628 [2024-07-22 17:27:58.501874] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:40.562 17:27:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:23:40.562 mke2fs 1.46.5 (30-Dec-2021) 00:23:41.078 Discarding device blocks: 0/131072 done 00:23:41.078 Warning: could not erase sector 2: Input/output error 00:23:41.078 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:41.078 Filesystem UUID: 
4b2cf9ff-f8b8-49ca-a23b-f8001791adea 00:23:41.078 Superblock backups stored on blocks: 00:23:41.078 32768, 98304 00:23:41.078 00:23:41.078 Allocating group tables: 0/4 done 00:23:41.078 Warning: could not read block 0: Input/output error 00:23:41.078 Warning: could not erase sector 0: Input/output error 00:23:41.078 Writing inode tables: 0/4 done 00:23:41.336 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:23:41.336 17:28:00 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 2 -ge 15 ']' 00:23:41.336 17:28:00 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=3 00:23:41.336 17:28:00 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:23:41.336 [2024-07-22 17:28:00.099516] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:42.273 17:28:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:23:42.274 mke2fs 1.46.5 (30-Dec-2021) 00:23:42.531 Discarding device blocks: 0/131072 done 00:23:42.788 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:42.788 Filesystem UUID: ede0b619-4cc2-4474-9662-5555ad9228c3 00:23:42.789 Superblock backups stored on blocks: 00:23:42.789 32768, 98304 00:23:42.789 00:23:42.789 Allocating group tables: 0/4 done 00:23:42.789 Warning: could not erase sector 2: Input/output error 00:23:42.789 Warning: could not read block 0: Input/output error 00:23:42.789 Warning: could not erase sector 0: Input/output error 00:23:42.789 Writing inode tables: 0/4 done 00:23:43.052 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:23:43.052 17:28:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 3 -ge 15 ']' 00:23:43.052 17:28:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=4 00:23:43.052 17:28:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:23:43.052 [2024-07-22 
17:28:01.789121] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:44.002 17:28:02 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:23:44.002 mke2fs 1.46.5 (30-Dec-2021) 00:23:44.261 Discarding device blocks: 0/131072 done 00:23:44.261 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:44.261 Filesystem UUID: 6b89aa6a-eef9-42ef-b9aa-6c17c96e0dc1 00:23:44.261 Superblock backups stored on blocks: 00:23:44.261 32768, 98304 00:23:44.261 00:23:44.261 Allocating group tables: 0/4 done 00:23:44.261 Warning: could not erase sector 2: Input/output error 00:23:44.520 Warning: could not read block 0: Input/output error 00:23:44.520 Warning: could not erase sector 0: Input/output error 00:23:44.520 Writing inode tables: 0/4 done 00:23:44.520 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:23:44.520 17:28:03 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 4 -ge 15 ']' 00:23:44.520 17:28:03 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=5 00:23:44.520 17:28:03 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:23:44.520 [2024-07-22 17:28:03.382393] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:45.457 17:28:04 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:23:45.457 mke2fs 1.46.5 (30-Dec-2021) 00:23:45.716 Discarding device blocks: 0/131072 done 00:23:45.975 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:45.975 Filesystem UUID: 4e36ce5e-bc47-457c-ab1e-fd095342eb08 00:23:45.975 Superblock backups stored on blocks: 00:23:45.975 32768, 98304 00:23:45.975 00:23:45.975 Allocating group tables: 0/4 done 00:23:45.975 Warning: could not erase sector 2: Input/output error 00:23:45.975 Warning: could not read block 0: Input/output error 00:23:45.975 Warning: could not erase sector 0: Input/output 
error 00:23:45.975 Writing inode tables: 0/4 done 00:23:46.234 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:23:46.234 17:28:04 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 5 -ge 15 ']' 00:23:46.234 17:28:04 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=6 00:23:46.234 17:28:04 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:23:46.234 [2024-07-22 17:28:04.982043] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:47.170 17:28:05 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:23:47.170 mke2fs 1.46.5 (30-Dec-2021) 00:23:47.428 Discarding device blocks: 0/131072 done 00:23:47.428 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:47.428 Filesystem UUID: 7a9cdc71-90fa-4457-aa19-abbc95418213 00:23:47.428 Superblock backups stored on blocks: 00:23:47.428 32768, 98304 00:23:47.428 00:23:47.428 Allocating group tables: 0/4 done 00:23:47.428 Warning: could not erase sector 2: Input/output error 00:23:47.686 Warning: could not read block 0: Input/output error 00:23:47.686 Warning: could not erase sector 0: Input/output error 00:23:47.686 Writing inode tables: 0/4 done 00:23:47.944 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:23:47.944 17:28:06 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 6 -ge 15 ']' 00:23:47.944 17:28:06 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=7 00:23:47.944 17:28:06 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:23:47.944 [2024-07-22 17:28:06.669707] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:48.878 17:28:07 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:23:48.878 mke2fs 1.46.5 (30-Dec-2021) 00:23:49.136 Discarding device blocks: 0/131072 done 
00:23:49.136 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:49.136 Filesystem UUID: f939a805-527f-4567-bd1e-c8b901388952 00:23:49.136 Superblock backups stored on blocks: 00:23:49.136 32768, 98304 00:23:49.136 00:23:49.136 Allocating group tables: 0/4 done 00:23:49.136 Warning: could not erase sector 2: Input/output error 00:23:49.396 Warning: could not read block 0: Input/output error 00:23:49.396 Warning: could not erase sector 0: Input/output error 00:23:49.396 Writing inode tables: 0/4 done 00:23:49.396 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:23:49.396 17:28:08 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 7 -ge 15 ']' 00:23:49.396 17:28:08 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=8 00:23:49.396 17:28:08 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:23:49.396 [2024-07-22 17:28:08.264479] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:50.331 17:28:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:23:50.332 mke2fs 1.46.5 (30-Dec-2021) 00:23:50.591 Discarding device blocks: 0/131072 done 00:23:50.851 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:50.851 Filesystem UUID: 5da6e412-8e67-445d-846e-167460b65586 00:23:50.851 Superblock backups stored on blocks: 00:23:50.851 32768, 98304 00:23:50.851 00:23:50.851 Allocating group tables: 0/4Warning: could not erase sector 2: Input/output error 00:23:50.851 done 00:23:50.851 Warning: could not read block 0: Input/output error 00:23:50.851 Warning: could not erase sector 0: Input/output error 00:23:50.851 Writing inode tables: 0/4 done 00:23:51.109 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:23:51.109 17:28:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 8 -ge 15 ']' 00:23:51.109 17:28:09 iscsi_tgt.iscsi_tgt_ext4test -- 
common/autotest_common.sh@939 -- # i=9 00:23:51.109 17:28:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:23:51.109 [2024-07-22 17:28:09.862189] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:52.045 17:28:10 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:23:52.045 mke2fs 1.46.5 (30-Dec-2021) 00:23:52.303 Discarding device blocks: 0/131072 done 00:23:52.303 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:52.303 Filesystem UUID: 417ed11c-869c-4549-9f6c-022ed4025058 00:23:52.303 Superblock backups stored on blocks: 00:23:52.303 32768, 98304 00:23:52.303 00:23:52.303 Allocating group tables: 0/4 done 00:23:52.303 Writing inode tables: 0/4 done 00:23:52.303 Creating journal (4096 blocks): done 00:23:52.303 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:23:52.303 17:28:11 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 9 -ge 15 ']' 00:23:52.303 17:28:11 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=10 00:23:52.303 [2024-07-22 17:28:11.191841] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:52.303 17:28:11 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:23:53.678 17:28:12 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:23:53.678 mke2fs 1.46.5 (30-Dec-2021) 00:23:53.678 Discarding device blocks: 0/131072 done 00:23:53.678 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:53.678 Filesystem UUID: cc463b63-75db-4b4a-b69d-7ecc3a645b1f 00:23:53.678 Superblock backups stored on blocks: 00:23:53.678 32768, 98304 00:23:53.678 00:23:53.678 Allocating group tables: 0/4 done 00:23:53.678 Writing inode tables: 0/4 done 00:23:53.678 Creating journal (4096 blocks): done 00:23:53.678 Writing 
superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:23:53.678 17:28:12 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 10 -ge 15 ']' 00:23:53.678 17:28:12 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=11 00:23:53.678 17:28:12 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:23:53.678 [2024-07-22 17:28:12.542222] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:54.613 17:28:13 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:23:54.613 mke2fs 1.46.5 (30-Dec-2021) 00:23:54.870 Discarding device blocks: 0/131072 done 00:23:54.870 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:54.870 Filesystem UUID: bc814087-a89d-4e73-8891-994f748ffa48 00:23:54.870 Superblock backups stored on blocks: 00:23:54.870 32768, 98304 00:23:54.870 00:23:54.870 Allocating group tables: 0/4 done 00:23:54.870 Writing inode tables: 0/4 done 00:23:55.128 Creating journal (4096 blocks): done 00:23:55.129 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:23:55.129 17:28:13 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 11 -ge 15 ']' 00:23:55.129 17:28:13 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=12 00:23:55.129 17:28:13 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:23:55.129 [2024-07-22 17:28:13.883197] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:56.068 17:28:14 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:23:56.068 mke2fs 1.46.5 (30-Dec-2021) 00:23:56.334 Discarding device blocks: 0/131072 done 00:23:56.334 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:56.334 Filesystem UUID: 
2693abc1-70de-4678-8be6-cb487de7e62d 00:23:56.334 Superblock backups stored on blocks: 00:23:56.334 32768, 98304 00:23:56.334 00:23:56.334 Allocating group tables: 0/4 done 00:23:56.334 Writing inode tables: 0/4 done 00:23:56.334 Creating journal (4096 blocks): done 00:23:56.334 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:23:56.334 17:28:15 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 12 -ge 15 ']' 00:23:56.334 17:28:15 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=13 00:23:56.334 17:28:15 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:23:56.334 [2024-07-22 17:28:15.232777] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:57.710 17:28:16 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:23:57.710 mke2fs 1.46.5 (30-Dec-2021) 00:23:57.710 Discarding device blocks: 0/131072 done 00:23:57.710 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:57.710 Filesystem UUID: 17d6821e-74e5-4835-b396-3a8e5a87e736 00:23:57.710 Superblock backups stored on blocks: 00:23:57.710 32768, 98304 00:23:57.710 00:23:57.710 Allocating group tables: 0/4 done 00:23:57.710 Writing inode tables: 0/4 done 00:23:57.710 Creating journal (4096 blocks): done 00:23:57.710 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:23:57.710 [2024-07-22 17:28:16.577424] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:57.710 17:28:16 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 13 -ge 15 ']' 00:23:57.710 17:28:16 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=14 00:23:57.710 17:28:16 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:23:58.644 17:28:17 
iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:23:58.644 mke2fs 1.46.5 (30-Dec-2021) 00:23:58.903 Discarding device blocks: 0/131072 done 00:23:58.903 Creating filesystem with 131072 4k blocks and 32768 inodes 00:23:58.903 Filesystem UUID: b30f6b52-9643-47d8-b4a0-6ac9a840021c 00:23:58.903 Superblock backups stored on blocks: 00:23:58.903 32768, 98304 00:23:58.903 00:23:58.903 Allocating group tables: 0/4 done 00:23:58.903 Writing inode tables: 0/4 done 00:23:59.161 Creating journal (4096 blocks): done 00:23:59.161 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:23:59.161 17:28:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 14 -ge 15 ']' 00:23:59.161 [2024-07-22 17:28:17.931216] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:59.161 17:28:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=15 00:23:59.161 17:28:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:24:00.096 17:28:18 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:24:00.096 mke2fs 1.46.5 (30-Dec-2021) 00:24:00.353 Discarding device blocks: 0/131072 done 00:24:00.353 Creating filesystem with 131072 4k blocks and 32768 inodes 00:24:00.353 Filesystem UUID: de2a81fb-7773-4586-a3fa-792ad46bdb7b 00:24:00.353 Superblock backups stored on blocks: 00:24:00.353 32768, 98304 00:24:00.353 00:24:00.353 Allocating group tables: 0/4 done 00:24:00.353 Writing inode tables: 0/4 done 00:24:00.353 Creating journal (4096 blocks): done 00:24:00.353 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:24:00.353 mkfs failed as expected 00:24:00.353 Cleaning up iSCSI connection 00:24:00.353 17:28:19 iscsi_tgt.iscsi_tgt_ext4test -- 
common/autotest_common.sh@936 -- # '[' 15 -ge 15 ']' 00:24:00.353 [2024-07-22 17:28:19.284239] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:00.353 17:28:19 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # return 1 00:24:00.353 17:28:19 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@70 -- # echo 'mkfs failed as expected' 00:24:00.353 17:28:19 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@73 -- # iscsicleanup 00:24:00.353 17:28:19 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:24:00.354 17:28:19 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:24:00.612 Logging out of session [sid: 71, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] 00:24:00.612 Logout of [sid: 71, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:24:00.612 17:28:19 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:24:00.612 17:28:19 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@983 -- # rm -rf 00:24:00.612 17:28:19 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_inject_error EE_Malloc0 clear failure 00:24:00.870 17:28:19 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_delete_target_node iqn.2013-06.com.intel.ch.spdk:Target0 00:24:01.128 Error injection test done 00:24:01.128 17:28:19 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@76 -- # echo 'Error injection test done' 00:24:01.128 17:28:19 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@78 -- # get_bdev_size Nvme0n1 00:24:01.128 17:28:19 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1378 -- # local bdev_name=Nvme0n1 00:24:01.128 17:28:19 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1379 -- # 
local bdev_info 00:24:01.128 17:28:19 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1380 -- # local bs 00:24:01.128 17:28:19 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1381 -- # local nb 00:24:01.128 17:28:19 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 00:24:01.386 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:01.386 { 00:24:01.386 "name": "Nvme0n1", 00:24:01.386 "aliases": [ 00:24:01.386 "41d6b968-3929-4e34-8876-90d87c6e75f2" 00:24:01.386 ], 00:24:01.386 "product_name": "NVMe disk", 00:24:01.386 "block_size": 4096, 00:24:01.386 "num_blocks": 1310720, 00:24:01.386 "uuid": "41d6b968-3929-4e34-8876-90d87c6e75f2", 00:24:01.386 "assigned_rate_limits": { 00:24:01.386 "rw_ios_per_sec": 0, 00:24:01.386 "rw_mbytes_per_sec": 0, 00:24:01.386 "r_mbytes_per_sec": 0, 00:24:01.386 "w_mbytes_per_sec": 0 00:24:01.386 }, 00:24:01.386 "claimed": false, 00:24:01.386 "zoned": false, 00:24:01.386 "supported_io_types": { 00:24:01.386 "read": true, 00:24:01.386 "write": true, 00:24:01.386 "unmap": true, 00:24:01.386 "flush": true, 00:24:01.386 "reset": true, 00:24:01.386 "nvme_admin": true, 00:24:01.386 "nvme_io": true, 00:24:01.386 "nvme_io_md": false, 00:24:01.386 "write_zeroes": true, 00:24:01.386 "zcopy": false, 00:24:01.386 "get_zone_info": false, 00:24:01.386 "zone_management": false, 00:24:01.386 "zone_append": false, 00:24:01.386 "compare": true, 00:24:01.386 "compare_and_write": false, 00:24:01.386 "abort": true, 00:24:01.386 "seek_hole": false, 00:24:01.386 "seek_data": false, 00:24:01.386 "copy": true, 00:24:01.386 "nvme_iov_md": false 00:24:01.386 }, 00:24:01.386 "driver_specific": { 00:24:01.387 "nvme": [ 00:24:01.387 { 00:24:01.387 "pci_address": "0000:00:10.0", 00:24:01.387 "trid": { 00:24:01.387 "trtype": "PCIe", 00:24:01.387 "traddr": "0000:00:10.0" 00:24:01.387 }, 00:24:01.387 "ctrlr_data": { 
00:24:01.387 "cntlid": 0, 00:24:01.387 "vendor_id": "0x1b36", 00:24:01.387 "model_number": "QEMU NVMe Ctrl", 00:24:01.387 "serial_number": "12340", 00:24:01.387 "firmware_revision": "8.0.0", 00:24:01.387 "subnqn": "nqn.2019-08.org.qemu:12340", 00:24:01.387 "oacs": { 00:24:01.387 "security": 0, 00:24:01.387 "format": 1, 00:24:01.387 "firmware": 0, 00:24:01.387 "ns_manage": 1 00:24:01.387 }, 00:24:01.387 "multi_ctrlr": false, 00:24:01.387 "ana_reporting": false 00:24:01.387 }, 00:24:01.387 "vs": { 00:24:01.387 "nvme_version": "1.4" 00:24:01.387 }, 00:24:01.387 "ns_data": { 00:24:01.387 "id": 1, 00:24:01.387 "can_share": false 00:24:01.387 } 00:24:01.387 } 00:24:01.387 ], 00:24:01.387 "mp_policy": "active_passive" 00:24:01.387 } 00:24:01.387 } 00:24:01.387 ]' 00:24:01.387 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:01.387 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1383 -- # bs=4096 00:24:01.387 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:01.387 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1384 -- # nb=1310720 00:24:01.387 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:24:01.387 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1388 -- # echo 5120 00:24:01.387 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@78 -- # bdev_size=5120 00:24:01.387 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@79 -- # split_size=2560 00:24:01.387 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@80 -- # split_size=2560 00:24:01.387 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create Nvme0n1 2 -s 2560 00:24:01.645 Nvme0n1p0 Nvme0n1p1 00:24:01.645 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@82 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias Nvme0n1p0:0 1:2 64 -d 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@84 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:24:01.904 10.0.0.1:3260,1 iqn.2013-06.com.intel.ch.spdk:Target1 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@85 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:24:01.904 Logging in to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] 00:24:01.904 Login to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@86 -- # waitforiscsidevices 1 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@116 -- # local num=1 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:24:01.904 [2024-07-22 17:28:20.825494] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # n=1 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@123 -- # return 0 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # iscsiadm -m session -P 3 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # grep 'Attached scsi disk' 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # 
awk '{print $4}' 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # head -n1 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # dev=sda 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@89 -- # waitforfile /dev/sda 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1265 -- # local i=0 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda ']' 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda ']' 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1276 -- # return 0 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@91 -- # make_filesystem ext4 /dev/sda 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@924 -- # local fstype=ext4 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@926 -- # local i=0 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@927 -- # local force 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@930 -- # force=-F 00:24:01.904 17:28:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:24:01.904 mke2fs 1.46.5 (30-Dec-2021) 00:24:01.904 Discarding device blocks: 0/655360 done 00:24:01.904 Creating filesystem with 655360 4k blocks and 163840 inodes 00:24:01.904 Filesystem UUID: fb75de25-0450-4cf9-80cb-200e8b46c848 00:24:01.904 Superblock backups stored on blocks: 00:24:01.904 32768, 98304, 163840, 229376, 294912 00:24:01.904 00:24:01.904 Allocating group tables: 0/20 done 00:24:01.904 Writing 
inode tables: 0/20 done 00:24:02.471 Creating journal (16384 blocks): done 00:24:02.471 Writing superblocks and filesystem accounting information: 0/20 done 00:24:02.471 00:24:02.471 17:28:21 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@943 -- # return 0 00:24:02.471 [2024-07-22 17:28:21.326240] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:02.471 17:28:21 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@92 -- # mkdir -p /mnt/sdadir 00:24:02.471 17:28:21 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@93 -- # mount -o sync /dev/sda /mnt/sdadir 00:24:02.471 17:28:21 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@95 -- # rsync -qav --exclude=.git '--exclude=*.o' /home/vagrant/spdk_repo/spdk/ /mnt/sdadir/spdk 00:25:53.973 17:30:01 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@97 -- # make -C /mnt/sdadir/spdk clean 00:25:53.973 make: Entering directory '/mnt/sdadir/spdk' 00:26:40.634 make[1]: Nothing to be done for 'clean'. 00:26:40.634 make: Leaving directory '/mnt/sdadir/spdk' 00:26:40.634 17:30:53 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@98 -- # cd /mnt/sdadir/spdk 00:26:40.634 17:30:53 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@98 -- # ./configure --disable-unit-tests --disable-tests 00:26:40.634 Using default SPDK env in /mnt/sdadir/spdk/lib/env_dpdk 00:26:40.634 Using default DPDK in /mnt/sdadir/spdk/dpdk/build 00:26:58.710 Configuring ISA-L (logfile: /mnt/sdadir/spdk/.spdk-isal.log)...done. 00:27:20.645 Configuring ISA-L-crypto (logfile: /mnt/sdadir/spdk/.spdk-isal-crypto.log)...done. 00:27:20.645 Creating mk/config.mk...done. 00:27:20.645 Creating mk/cc.flags.mk...done. 00:27:20.904 Type 'make' to build. 00:27:20.904 17:31:39 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@99 -- # make -C /mnt/sdadir/spdk -j 00:27:20.904 make: Entering directory '/mnt/sdadir/spdk' 00:27:21.162 make[1]: Nothing to be done for 'all'. 
00:27:53.229 The Meson build system 00:27:53.229 Version: 1.3.1 00:27:53.229 Source dir: /mnt/sdadir/spdk/dpdk 00:27:53.229 Build dir: /mnt/sdadir/spdk/dpdk/build-tmp 00:27:53.229 Build type: native build 00:27:53.229 Program cat found: YES (/usr/bin/cat) 00:27:53.229 Project name: DPDK 00:27:53.229 Project version: 24.03.0 00:27:53.229 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:27:53.229 C linker for the host machine: cc ld.bfd 2.39-16 00:27:53.229 Host machine cpu family: x86_64 00:27:53.229 Host machine cpu: x86_64 00:27:53.229 Program pkg-config found: YES (/usr/bin/pkg-config) 00:27:53.229 Program check-symbols.sh found: YES (/mnt/sdadir/spdk/dpdk/buildtools/check-symbols.sh) 00:27:53.229 Program options-ibverbs-static.sh found: YES (/mnt/sdadir/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:27:53.229 Program python3 found: YES (/usr/bin/python3) 00:27:53.229 Program cat found: YES (/usr/bin/cat) 00:27:53.229 Compiler for C supports arguments -march=native: YES 00:27:53.229 Checking for size of "void *" : 8 00:27:53.229 Checking for size of "void *" : 8 (cached) 00:27:53.229 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:27:53.229 Library m found: YES 00:27:53.229 Library numa found: YES 00:27:53.229 Has header "numaif.h" : YES 00:27:53.229 Library fdt found: NO 00:27:53.229 Library execinfo found: NO 00:27:53.229 Has header "execinfo.h" : YES 00:27:53.229 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:27:53.229 Run-time dependency libarchive found: NO (tried pkgconfig) 00:27:53.229 Run-time dependency libbsd found: NO (tried pkgconfig) 00:27:53.229 Run-time dependency jansson found: NO (tried pkgconfig) 00:27:53.229 Run-time dependency openssl found: YES 3.0.9 00:27:53.229 Run-time dependency libpcap found: YES 1.10.4 00:27:53.229 Has header "pcap.h" with dependency libpcap: YES 00:27:53.229 Compiler for C supports arguments -Wcast-qual: YES 00:27:53.229 Compiler for C 
supports arguments -Wdeprecated: YES 00:27:53.229 Compiler for C supports arguments -Wformat: YES 00:27:53.229 Compiler for C supports arguments -Wformat-nonliteral: YES 00:27:53.229 Compiler for C supports arguments -Wformat-security: YES 00:27:53.229 Compiler for C supports arguments -Wmissing-declarations: YES 00:27:53.229 Compiler for C supports arguments -Wmissing-prototypes: YES 00:27:53.229 Compiler for C supports arguments -Wnested-externs: YES 00:27:53.229 Compiler for C supports arguments -Wold-style-definition: YES 00:27:53.229 Compiler for C supports arguments -Wpointer-arith: YES 00:27:53.229 Compiler for C supports arguments -Wsign-compare: YES 00:27:53.229 Compiler for C supports arguments -Wstrict-prototypes: YES 00:27:53.229 Compiler for C supports arguments -Wundef: YES 00:27:53.229 Compiler for C supports arguments -Wwrite-strings: YES 00:27:53.229 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:27:53.229 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:27:53.229 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:27:53.229 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:27:53.229 Program objdump found: YES (/usr/bin/objdump) 00:27:53.229 Compiler for C supports arguments -mavx512f: YES 00:27:53.229 Checking if "AVX512 checking" compiles: YES 00:27:53.229 Fetching value of define "__SSE4_2__" : 1 00:27:53.229 Fetching value of define "__AES__" : 1 00:27:53.229 Fetching value of define "__AVX__" : 1 00:27:53.229 Fetching value of define "__AVX2__" : 1 00:27:53.229 Fetching value of define "__AVX512BW__" : (undefined) 00:27:53.229 Fetching value of define "__AVX512CD__" : (undefined) 00:27:53.229 Fetching value of define "__AVX512DQ__" : (undefined) 00:27:53.229 Fetching value of define "__AVX512F__" : (undefined) 00:27:53.229 Fetching value of define "__AVX512VL__" : (undefined) 00:27:53.229 Fetching value of define "__PCLMUL__" : 1 00:27:53.229 Fetching value of 
define "__RDRND__" : 1 00:27:53.229 Fetching value of define "__RDSEED__" : 1 00:27:53.229 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:27:53.229 Fetching value of define "__znver1__" : (undefined) 00:27:53.229 Fetching value of define "__znver2__" : (undefined) 00:27:53.229 Fetching value of define "__znver3__" : (undefined) 00:27:53.229 Fetching value of define "__znver4__" : (undefined) 00:27:53.229 Compiler for C supports arguments -Wno-format-truncation: YES 00:27:53.229 Checking for function "getentropy" : NO 00:27:53.229 Fetching value of define "__PCLMUL__" : 1 (cached) 00:27:53.229 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:27:53.229 Compiler for C supports arguments -mpclmul: YES 00:27:53.229 Compiler for C supports arguments -maes: YES 00:27:53.229 Compiler for C supports arguments -mavx512f: YES (cached) 00:27:53.229 Compiler for C supports arguments -mavx512bw: YES 00:27:53.229 Compiler for C supports arguments -mavx512dq: YES 00:27:53.229 Compiler for C supports arguments -mavx512vl: YES 00:27:53.229 Compiler for C supports arguments -mvpclmulqdq: YES 00:27:53.229 Compiler for C supports arguments -mavx2: YES 00:27:53.229 Compiler for C supports arguments -mavx: YES 00:27:53.229 Compiler for C supports arguments -Wno-cast-qual: YES 00:27:53.229 Has header "linux/userfaultfd.h" : YES 00:27:53.229 Has header "linux/vduse.h" : YES 00:27:53.229 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:27:53.229 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:27:53.229 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:27:53.229 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:27:53.229 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:27:53.229 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:27:53.229 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:27:53.229 Program doxygen found: YES (/usr/bin/doxygen) 00:27:53.229 Configuring doxy-api-html.conf using configuration 00:27:53.229 Configuring doxy-api-man.conf using configuration 00:27:53.229 Program mandb found: YES (/usr/bin/mandb) 00:27:53.229 Program sphinx-build found: NO 00:27:53.229 Configuring rte_build_config.h using configuration 00:27:53.229 Message: 00:27:53.229 ================= 00:27:53.229 Applications Enabled 00:27:53.229 ================= 00:27:53.229 00:27:53.229 apps: 00:27:53.229 00:27:53.229 00:27:53.229 Message: 00:27:53.229 ================= 00:27:53.229 Libraries Enabled 00:27:53.229 ================= 00:27:53.230 00:27:53.230 libs: 00:27:53.230 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:27:53.230 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:27:53.230 cryptodev, dmadev, power, reorder, security, vhost, 00:27:53.230 00:27:53.230 Message: 00:27:53.230 =============== 00:27:53.230 Drivers Enabled 00:27:53.230 =============== 00:27:53.230 00:27:53.230 common: 00:27:53.230 00:27:53.230 bus: 00:27:53.230 pci, vdev, 00:27:53.230 mempool: 00:27:53.230 ring, 00:27:53.230 dma: 00:27:53.230 00:27:53.230 net: 00:27:53.230 00:27:53.230 crypto: 00:27:53.230 00:27:53.230 compress: 00:27:53.230 00:27:53.230 vdpa: 00:27:53.230 00:27:53.230 00:27:53.230 Message: 00:27:53.230 ================= 00:27:53.230 Content Skipped 00:27:53.230 ================= 00:27:53.230 00:27:53.230 apps: 00:27:53.230 dumpcap: explicitly disabled via build config 00:27:53.230 graph: explicitly disabled via build config 00:27:53.230 pdump: explicitly disabled via build config 00:27:53.230 proc-info: explicitly disabled via build config 00:27:53.230 test-acl: explicitly disabled via build config 00:27:53.230 test-bbdev: explicitly disabled via build config 00:27:53.230 test-cmdline: explicitly disabled via build config 00:27:53.230 test-compress-perf: explicitly disabled via build config 00:27:53.230 test-crypto-perf: explicitly disabled via 
build config 00:27:53.230 test-dma-perf: explicitly disabled via build config 00:27:53.230 test-eventdev: explicitly disabled via build config 00:27:53.230 test-fib: explicitly disabled via build config 00:27:53.230 test-flow-perf: explicitly disabled via build config 00:27:53.230 test-gpudev: explicitly disabled via build config 00:27:53.230 test-mldev: explicitly disabled via build config 00:27:53.230 test-pipeline: explicitly disabled via build config 00:27:53.230 test-pmd: explicitly disabled via build config 00:27:53.230 test-regex: explicitly disabled via build config 00:27:53.230 test-sad: explicitly disabled via build config 00:27:53.230 test-security-perf: explicitly disabled via build config 00:27:53.230 00:27:53.230 libs: 00:27:53.230 argparse: explicitly disabled via build config 00:27:53.230 metrics: explicitly disabled via build config 00:27:53.230 acl: explicitly disabled via build config 00:27:53.230 bbdev: explicitly disabled via build config 00:27:53.230 bitratestats: explicitly disabled via build config 00:27:53.230 bpf: explicitly disabled via build config 00:27:53.230 cfgfile: explicitly disabled via build config 00:27:53.230 distributor: explicitly disabled via build config 00:27:53.230 efd: explicitly disabled via build config 00:27:53.230 eventdev: explicitly disabled via build config 00:27:53.230 dispatcher: explicitly disabled via build config 00:27:53.230 gpudev: explicitly disabled via build config 00:27:53.230 gro: explicitly disabled via build config 00:27:53.230 gso: explicitly disabled via build config 00:27:53.230 ip_frag: explicitly disabled via build config 00:27:53.230 jobstats: explicitly disabled via build config 00:27:53.230 latencystats: explicitly disabled via build config 00:27:53.230 lpm: explicitly disabled via build config 00:27:53.230 member: explicitly disabled via build config 00:27:53.230 pcapng: explicitly disabled via build config 00:27:53.230 rawdev: explicitly disabled via build config 00:27:53.230 regexdev: 
explicitly disabled via build config 00:27:53.230 mldev: explicitly disabled via build config 00:27:53.230 rib: explicitly disabled via build config 00:27:53.230 sched: explicitly disabled via build config 00:27:53.230 stack: explicitly disabled via build config 00:27:53.230 ipsec: explicitly disabled via build config 00:27:53.230 pdcp: explicitly disabled via build config 00:27:53.230 fib: explicitly disabled via build config 00:27:53.230 port: explicitly disabled via build config 00:27:53.230 pdump: explicitly disabled via build config 00:27:53.230 table: explicitly disabled via build config 00:27:53.230 pipeline: explicitly disabled via build config 00:27:53.230 graph: explicitly disabled via build config 00:27:53.230 node: explicitly disabled via build config 00:27:53.230 00:27:53.230 drivers: 00:27:53.230 common/cpt: not in enabled drivers build config 00:27:53.230 common/dpaax: not in enabled drivers build config 00:27:53.230 common/iavf: not in enabled drivers build config 00:27:53.230 common/idpf: not in enabled drivers build config 00:27:53.230 common/ionic: not in enabled drivers build config 00:27:53.230 common/mvep: not in enabled drivers build config 00:27:53.230 common/octeontx: not in enabled drivers build config 00:27:53.230 bus/auxiliary: not in enabled drivers build config 00:27:53.230 bus/cdx: not in enabled drivers build config 00:27:53.230 bus/dpaa: not in enabled drivers build config 00:27:53.230 bus/fslmc: not in enabled drivers build config 00:27:53.230 bus/ifpga: not in enabled drivers build config 00:27:53.230 bus/platform: not in enabled drivers build config 00:27:53.230 bus/uacce: not in enabled drivers build config 00:27:53.230 bus/vmbus: not in enabled drivers build config 00:27:53.230 common/cnxk: not in enabled drivers build config 00:27:53.230 common/mlx5: not in enabled drivers build config 00:27:53.230 common/nfp: not in enabled drivers build config 00:27:53.230 common/nitrox: not in enabled drivers build config 00:27:53.230 
common/qat: not in enabled drivers build config 00:27:53.230 common/sfc_efx: not in enabled drivers build config 00:27:53.230 mempool/bucket: not in enabled drivers build config 00:27:53.230 mempool/cnxk: not in enabled drivers build config 00:27:53.230 mempool/dpaa: not in enabled drivers build config 00:27:53.230 mempool/dpaa2: not in enabled drivers build config 00:27:53.230 mempool/octeontx: not in enabled drivers build config 00:27:53.230 mempool/stack: not in enabled drivers build config 00:27:53.230 dma/cnxk: not in enabled drivers build config 00:27:53.230 dma/dpaa: not in enabled drivers build config 00:27:53.230 dma/dpaa2: not in enabled drivers build config 00:27:53.230 dma/hisilicon: not in enabled drivers build config 00:27:53.230 dma/idxd: not in enabled drivers build config 00:27:53.230 dma/ioat: not in enabled drivers build config 00:27:53.230 dma/skeleton: not in enabled drivers build config 00:27:53.230 net/af_packet: not in enabled drivers build config 00:27:53.230 net/af_xdp: not in enabled drivers build config 00:27:53.230 net/ark: not in enabled drivers build config 00:27:53.230 net/atlantic: not in enabled drivers build config 00:27:53.230 net/avp: not in enabled drivers build config 00:27:53.230 net/axgbe: not in enabled drivers build config 00:27:53.230 net/bnx2x: not in enabled drivers build config 00:27:53.230 net/bnxt: not in enabled drivers build config 00:27:53.230 net/bonding: not in enabled drivers build config 00:27:53.230 net/cnxk: not in enabled drivers build config 00:27:53.230 net/cpfl: not in enabled drivers build config 00:27:53.230 net/cxgbe: not in enabled drivers build config 00:27:53.230 net/dpaa: not in enabled drivers build config 00:27:53.230 net/dpaa2: not in enabled drivers build config 00:27:53.230 net/e1000: not in enabled drivers build config 00:27:53.230 net/ena: not in enabled drivers build config 00:27:53.230 net/enetc: not in enabled drivers build config 00:27:53.230 net/enetfec: not in enabled drivers build 
config 00:27:53.230 net/enic: not in enabled drivers build config 00:27:53.230 net/failsafe: not in enabled drivers build config 00:27:53.230 net/fm10k: not in enabled drivers build config 00:27:53.230 net/gve: not in enabled drivers build config 00:27:53.230 net/hinic: not in enabled drivers build config 00:27:53.230 net/hns3: not in enabled drivers build config 00:27:53.230 net/i40e: not in enabled drivers build config 00:27:53.230 net/iavf: not in enabled drivers build config 00:27:53.230 net/ice: not in enabled drivers build config 00:27:53.230 net/idpf: not in enabled drivers build config 00:27:53.230 net/igc: not in enabled drivers build config 00:27:53.230 net/ionic: not in enabled drivers build config 00:27:53.230 net/ipn3ke: not in enabled drivers build config 00:27:53.230 net/ixgbe: not in enabled drivers build config 00:27:53.230 net/mana: not in enabled drivers build config 00:27:53.230 net/memif: not in enabled drivers build config 00:27:53.230 net/mlx4: not in enabled drivers build config 00:27:53.230 net/mlx5: not in enabled drivers build config 00:27:53.230 net/mvneta: not in enabled drivers build config 00:27:53.230 net/mvpp2: not in enabled drivers build config 00:27:53.230 net/netvsc: not in enabled drivers build config 00:27:53.230 net/nfb: not in enabled drivers build config 00:27:53.230 net/nfp: not in enabled drivers build config 00:27:53.230 net/ngbe: not in enabled drivers build config 00:27:53.230 net/null: not in enabled drivers build config 00:27:53.230 net/octeontx: not in enabled drivers build config 00:27:53.230 net/octeon_ep: not in enabled drivers build config 00:27:53.230 net/pcap: not in enabled drivers build config 00:27:53.230 net/pfe: not in enabled drivers build config 00:27:53.230 net/qede: not in enabled drivers build config 00:27:53.230 net/ring: not in enabled drivers build config 00:27:53.230 net/sfc: not in enabled drivers build config 00:27:53.230 net/softnic: not in enabled drivers build config 00:27:53.230 net/tap: 
not in enabled drivers build config 00:27:53.230 net/thunderx: not in enabled drivers build config 00:27:53.230 net/txgbe: not in enabled drivers build config 00:27:53.230 net/vdev_netvsc: not in enabled drivers build config 00:27:53.230 net/vhost: not in enabled drivers build config 00:27:53.230 net/virtio: not in enabled drivers build config 00:27:53.230 net/vmxnet3: not in enabled drivers build config 00:27:53.230 raw/*: missing internal dependency, "rawdev" 00:27:53.230 crypto/armv8: not in enabled drivers build config 00:27:53.230 crypto/bcmfs: not in enabled drivers build config 00:27:53.230 crypto/caam_jr: not in enabled drivers build config 00:27:53.230 crypto/ccp: not in enabled drivers build config 00:27:53.230 crypto/cnxk: not in enabled drivers build config 00:27:53.230 crypto/dpaa_sec: not in enabled drivers build config 00:27:53.230 crypto/dpaa2_sec: not in enabled drivers build config 00:27:53.230 crypto/ipsec_mb: not in enabled drivers build config 00:27:53.230 crypto/mlx5: not in enabled drivers build config 00:27:53.230 crypto/mvsam: not in enabled drivers build config 00:27:53.231 crypto/nitrox: not in enabled drivers build config 00:27:53.231 crypto/null: not in enabled drivers build config 00:27:53.231 crypto/octeontx: not in enabled drivers build config 00:27:53.231 crypto/openssl: not in enabled drivers build config 00:27:53.231 crypto/scheduler: not in enabled drivers build config 00:27:53.231 crypto/uadk: not in enabled drivers build config 00:27:53.231 crypto/virtio: not in enabled drivers build config 00:27:53.231 compress/isal: not in enabled drivers build config 00:27:53.231 compress/mlx5: not in enabled drivers build config 00:27:53.231 compress/nitrox: not in enabled drivers build config 00:27:53.231 compress/octeontx: not in enabled drivers build config 00:27:53.231 compress/zlib: not in enabled drivers build config 00:27:53.231 regex/*: missing internal dependency, "regexdev" 00:27:53.231 ml/*: missing internal dependency, "mldev" 
00:27:53.231 vdpa/ifc: not in enabled drivers build config 00:27:53.231 vdpa/mlx5: not in enabled drivers build config 00:27:53.231 vdpa/nfp: not in enabled drivers build config 00:27:53.231 vdpa/sfc: not in enabled drivers build config 00:27:53.231 event/*: missing internal dependency, "eventdev" 00:27:53.231 baseband/*: missing internal dependency, "bbdev" 00:27:53.231 gpu/*: missing internal dependency, "gpudev" 00:27:53.231 00:27:53.231 00:27:53.231 Build targets in project: 61 00:27:53.231 00:27:53.231 DPDK 24.03.0 00:27:53.231 00:27:53.231 User defined options 00:27:53.231 default_library : static 00:27:53.231 libdir : lib 00:27:53.231 prefix : /mnt/sdadir/spdk/dpdk/build 00:27:53.231 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Wno-error 00:27:53.231 c_link_args : 00:27:53.231 cpu_instruction_set: native 00:27:53.231 disable_apps : test-eventdev,test,test-mldev,graph,test-cmdline,test-fib,test-gpudev,test-security-perf,dumpcap,proc-info,test-pipeline,test-dma-perf,test-pmd,test-crypto-perf,test-flow-perf,pdump,test-bbdev,test-regex,test-acl,test-compress-perf,test-sad 00:27:53.231 disable_libs : stack,node,argparse,pipeline,jobstats,eventdev,latencystats,graph,lpm,rib,rawdev,efd,regexdev,ipsec,ip_frag,gpudev,acl,member,port,mldev,gso,dispatcher,fib,gro,pdcp,bbdev,sched,bpf,table,pdump,pcapng,distributor,bitratestats,metrics,cfgfile 00:27:53.231 enable_docs : false 00:27:53.231 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:27:53.231 enable_kmods : false 00:27:53.231 max_lcores : 128 00:27:53.231 tests : false 00:27:53.231 00:27:53.231 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:27:53.231 ninja: Entering directory `/mnt/sdadir/spdk/dpdk/build-tmp' 00:27:53.231 [1/244] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:27:53.231 [2/244] Compiling C object lib/librte_log.a.p/log_log.c.o 00:27:53.231 [3/244] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 
00:27:53.231 [4/244] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:27:53.231 [5/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:27:53.231 [6/244] Linking static target lib/librte_log.a 00:27:53.231 [7/244] Linking static target lib/librte_kvargs.a 00:27:53.231 [8/244] Linking target lib/librte_log.so.24.1 00:27:53.231 [9/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:27:53.231 [10/244] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:27:53.489 [11/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:27:53.489 [12/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:27:53.747 [13/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:27:53.747 [14/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:27:53.747 [15/244] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:27:53.748 [16/244] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:27:53.748 [17/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:27:53.748 [18/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:27:53.748 [19/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:27:54.005 [20/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:27:54.005 [21/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:27:54.005 [22/244] Linking target lib/librte_kvargs.so.24.1 00:27:54.005 [23/244] Linking static target lib/librte_telemetry.a 00:27:54.005 [24/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:27:54.005 [25/244] Linking target lib/librte_telemetry.so.24.1 00:27:54.263 [26/244] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:27:54.263 [27/244] 
Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:27:54.522 [28/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:27:54.522 [29/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:27:54.522 [30/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:27:54.522 [31/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:27:54.779 [32/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:27:54.779 [33/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:27:54.779 [34/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:27:54.779 [35/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:27:54.779 [36/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:27:55.037 [37/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:27:55.037 [38/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:27:55.037 [39/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:27:55.037 [40/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:27:55.037 [41/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:27:55.037 [42/244] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:27:55.296 [43/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:27:55.296 [44/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:27:55.554 [45/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:27:55.554 [46/244] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:27:55.554 [47/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:27:55.814 [48/244] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:27:55.814 [49/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:27:56.072 [50/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:27:56.072 [51/244] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:27:56.072 [52/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:27:56.072 [53/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:27:56.072 [54/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:27:56.072 [55/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:27:56.072 [56/244] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:27:56.330 [57/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:27:56.330 [58/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:27:56.330 [59/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:27:56.330 [60/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:27:56.330 [61/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:27:56.896 [62/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:27:56.896 [63/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:27:56.896 [64/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:27:56.896 [65/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:27:57.154 [66/244] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:27:57.154 [67/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:27:57.154 [68/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:27:57.154 [69/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:27:57.154 [70/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:27:57.154 [71/244] 
Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:27:57.413 [72/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:27:57.413 [73/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:27:57.671 [74/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:27:57.671 [75/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:27:57.671 [76/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:27:57.671 [77/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:27:57.930 [78/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:27:57.930 [79/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:27:58.188 [80/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:27:58.188 [81/244] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:27:58.188 [82/244] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:27:58.188 [83/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:27:58.188 [84/244] Linking static target lib/librte_ring.a 00:27:58.446 [85/244] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:27:58.446 [86/244] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:27:58.705 [87/244] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:27:58.705 [88/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:27:58.705 [89/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:27:58.705 [90/244] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:27:58.963 [91/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:27:58.963 [92/244] Linking static target lib/librte_mempool.a 00:27:58.963 [93/244] Linking static target lib/net/libnet_crc_avx512_lib.a 00:27:58.963 [94/244] Compiling C object 
lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:27:59.222 [95/244] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:27:59.222 [96/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:27:59.222 [97/244] Linking static target lib/librte_rcu.a 00:27:59.222 [98/244] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:27:59.480 [99/244] Linking static target lib/librte_mbuf.a 00:27:59.480 [100/244] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:27:59.480 [101/244] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:27:59.480 [102/244] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:27:59.480 [103/244] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:27:59.739 [104/244] Linking static target lib/librte_meter.a 00:27:59.739 [105/244] Linking static target lib/librte_net.a 00:27:59.739 [106/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:27:59.739 [107/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:27:59.999 [108/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:27:59.999 [109/244] Linking target lib/librte_eal.so.24.1 00:27:59.999 [110/244] Linking static target lib/librte_eal.a 00:28:00.257 [111/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:28:00.257 [112/244] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:28:00.515 [113/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:28:00.516 [114/244] Linking target lib/librte_ring.so.24.1 00:28:00.516 [115/244] Linking target lib/librte_meter.so.24.1 00:28:00.516 [116/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:28:00.516 [117/244] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:28:00.774 [118/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:28:00.774 [119/244] Linking target lib/librte_rcu.so.24.1 00:28:00.774 
[120/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:28:00.774 [121/244] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:28:01.033 [122/244] Linking target lib/librte_mempool.so.24.1 00:28:01.033 [123/244] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:28:01.033 [124/244] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:28:01.033 [125/244] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:28:01.033 [126/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:28:01.033 [127/244] Linking static target lib/librte_pci.a 00:28:01.291 [128/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:28:01.291 [129/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:28:01.291 [130/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:28:01.291 [131/244] Linking target lib/librte_pci.so.24.1 00:28:01.291 [132/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:28:01.549 [133/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:28:01.549 [134/244] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:28:01.549 [135/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:28:01.549 [136/244] Linking target lib/librte_mbuf.so.24.1 00:28:01.549 [137/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:28:01.549 [138/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:28:01.852 [139/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:28:01.852 [140/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:28:01.852 [141/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:28:01.852 [142/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:28:01.852 [143/244] 
Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:28:01.852 [144/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:28:01.852 [145/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:28:01.852 [146/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:28:02.110 [147/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:28:02.110 [148/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:28:02.110 [149/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:28:02.110 [150/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:28:02.110 [151/244] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:28:02.110 [152/244] Linking target lib/librte_net.so.24.1 00:28:02.110 [153/244] Linking static target lib/librte_cmdline.a 00:28:02.369 [154/244] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:28:02.627 [155/244] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:28:02.627 [156/244] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:28:02.885 [157/244] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:28:02.885 [158/244] Linking target lib/librte_cmdline.so.24.1 00:28:03.143 [159/244] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:28:03.143 [160/244] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:28:03.143 [161/244] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:28:03.143 [162/244] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:28:03.143 [163/244] Linking static target lib/librte_timer.a 00:28:03.143 [164/244] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:28:03.401 [165/244] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:28:03.401 
[166/244] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:28:03.401 [167/244] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:28:03.401 [168/244] Linking target lib/librte_timer.so.24.1 00:28:03.401 [169/244] Linking static target lib/librte_compressdev.a 00:28:03.401 [170/244] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:28:03.401 [171/244] Linking target lib/librte_compressdev.so.24.1 00:28:03.659 [172/244] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:28:03.659 [173/244] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:28:03.930 [174/244] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:28:03.930 [175/244] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:28:03.930 [176/244] Linking static target lib/librte_dmadev.a 00:28:04.213 [177/244] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:28:04.213 [178/244] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:28:04.213 [179/244] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:28:04.213 [180/244] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:28:04.213 [181/244] Linking target lib/librte_dmadev.so.24.1 00:28:04.213 [182/244] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:28:04.213 [183/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:28:04.471 [184/244] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:28:04.471 [185/244] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:28:04.730 [186/244] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:28:04.987 [187/244] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:28:04.987 [188/244] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:28:04.987 [189/244] 
Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:28:04.987 [190/244] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:28:05.244 [191/244] Linking target lib/librte_ethdev.so.24.1 00:28:05.244 [192/244] Linking static target lib/librte_reorder.a 00:28:05.244 [193/244] Linking static target lib/librte_hash.a 00:28:05.244 [194/244] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:28:05.244 [195/244] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:28:05.245 [196/244] Linking static target lib/librte_power.a 00:28:05.245 [197/244] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:28:05.245 [198/244] Linking static target lib/librte_security.a 00:28:05.245 [199/244] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:28:05.502 [200/244] Linking target lib/librte_reorder.so.24.1 00:28:05.502 [201/244] Linking target lib/librte_hash.so.24.1 00:28:05.502 [202/244] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:28:05.502 [203/244] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:28:05.502 [204/244] Linking target lib/librte_cryptodev.so.24.1 00:28:05.502 [205/244] Linking static target lib/librte_cryptodev.a 00:28:05.760 [206/244] Linking target lib/librte_power.so.24.1 00:28:05.760 [207/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:28:05.760 [208/244] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:28:05.760 [209/244] Linking static target lib/librte_ethdev.a 00:28:06.019 [210/244] Linking target lib/librte_security.so.24.1 00:28:06.277 [211/244] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:28:06.277 [212/244] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:28:06.535 [213/244] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:28:06.535 [214/244] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:28:06.535 [215/244] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:28:06.793 [216/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:28:06.793 [217/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:28:06.793 [218/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:28:07.051 [219/244] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:28:07.051 [220/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:28:07.051 [221/244] Linking static target drivers/libtmp_rte_bus_vdev.a 00:28:07.051 [222/244] Linking static target drivers/libtmp_rte_bus_pci.a 00:28:07.051 [223/244] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:28:07.309 [224/244] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:28:07.309 [225/244] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:28:07.309 [226/244] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:28:07.567 [227/244] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:28:07.567 [228/244] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:28:07.567 [229/244] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:28:07.567 [230/244] Linking static target drivers/librte_bus_vdev.a 00:28:07.567 [231/244] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:28:07.567 [232/244] Linking static target drivers/libtmp_rte_mempool_ring.a 00:28:07.567 [233/244] Linking target drivers/librte_bus_vdev.so.24.1 00:28:07.567 [234/244] Linking static target drivers/librte_bus_pci.a 00:28:07.567 [235/244] Linking target drivers/librte_bus_pci.so.24.1 00:28:07.824 [236/244] Generating 
drivers/rte_mempool_ring.pmd.c with a custom command 00:28:07.824 [237/244] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:28:07.824 [238/244] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:28:08.083 [239/244] Linking static target drivers/librte_mempool_ring.a 00:28:08.083 [240/244] Linking target drivers/librte_mempool_ring.so.24.1 00:28:09.983 [241/244] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:28:19.971 [242/244] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:28:19.971 [243/244] Linking target lib/librte_vhost.so.24.1 00:28:19.971 [244/244] Linking static target lib/librte_vhost.a 00:28:19.971 INFO: autodetecting backend as ninja 00:28:19.971 INFO: calculating backend command to run: /usr/local/bin/ninja -C /mnt/sdadir/spdk/dpdk/build-tmp 00:28:26.537 CC lib/log/log.o 00:28:26.537 CC lib/log/log_flags.o 00:28:26.537 CC lib/log/log_deprecated.o 00:28:26.537 CC lib/ut_mock/mock.o 00:28:26.537 LIB libspdk_ut_mock.a 00:28:26.537 LIB libspdk_log.a 00:28:26.537 CC lib/ioat/ioat.o 00:28:26.537 CC lib/dma/dma.o 00:28:26.537 CXX lib/trace_parser/trace.o 00:28:26.795 CC lib/util/base64.o 00:28:26.795 CC lib/util/bit_array.o 00:28:26.795 CC lib/util/crc16.o 00:28:26.795 CC lib/util/crc32.o 00:28:26.795 CC lib/util/cpuset.o 00:28:26.795 CC lib/util/crc32_ieee.o 00:28:26.795 CC lib/util/crc32c.o 00:28:26.795 CC lib/util/crc64.o 00:28:26.795 CC lib/util/dif.o 00:28:26.795 CC lib/util/fd.o 00:28:26.795 CC lib/util/fd_group.o 00:28:26.795 CC lib/util/file.o 00:28:26.795 CC lib/util/hexlify.o 00:28:26.795 CC lib/util/iov.o 00:28:26.795 CC lib/util/math.o 00:28:26.795 CC lib/util/net.o 00:28:26.795 CC lib/util/pipe.o 00:28:26.795 CC lib/util/strerror_tls.o 00:28:26.795 CC lib/util/string.o 00:28:26.795 CC lib/util/uuid.o 00:28:26.795 CC lib/util/xor.o 00:28:26.795 CC lib/util/zipf.o 00:28:27.053 CC lib/vfio_user/host/vfio_user_pci.o 
00:28:27.054 CC lib/vfio_user/host/vfio_user.o 00:28:27.312 LIB libspdk_dma.a 00:28:27.569 LIB libspdk_ioat.a 00:28:27.569 LIB libspdk_vfio_user.a 00:28:28.135 LIB libspdk_trace_parser.a 00:28:28.135 LIB libspdk_util.a 00:28:29.067 CC lib/vmd/vmd.o 00:28:29.067 CC lib/vmd/led.o 00:28:29.067 CC lib/conf/conf.o 00:28:29.067 CC lib/env_dpdk/env.o 00:28:29.067 CC lib/json/json_parse.o 00:28:29.067 CC lib/json/json_write.o 00:28:29.067 CC lib/env_dpdk/memory.o 00:28:29.067 CC lib/json/json_util.o 00:28:29.067 CC lib/env_dpdk/pci.o 00:28:29.067 CC lib/env_dpdk/init.o 00:28:29.067 CC lib/env_dpdk/threads.o 00:28:29.067 CC lib/env_dpdk/pci_ioat.o 00:28:29.067 CC lib/env_dpdk/pci_virtio.o 00:28:29.067 CC lib/env_dpdk/pci_vmd.o 00:28:29.067 CC lib/env_dpdk/pci_idxd.o 00:28:29.067 CC lib/env_dpdk/pci_event.o 00:28:29.067 CC lib/env_dpdk/sigbus_handler.o 00:28:29.067 CC lib/env_dpdk/pci_dpdk.o 00:28:29.067 CC lib/env_dpdk/pci_dpdk_2207.o 00:28:29.067 CC lib/env_dpdk/pci_dpdk_2211.o 00:28:30.001 LIB libspdk_conf.a 00:28:30.001 LIB libspdk_json.a 00:28:30.001 LIB libspdk_vmd.a 00:28:30.567 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:28:30.567 CC lib/jsonrpc/jsonrpc_server.o 00:28:30.567 CC lib/jsonrpc/jsonrpc_client.o 00:28:30.567 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:28:31.132 LIB libspdk_jsonrpc.a 00:28:31.132 LIB libspdk_env_dpdk.a 00:28:31.698 CC lib/rpc/rpc.o 00:28:31.955 LIB libspdk_rpc.a 00:28:32.212 CC lib/keyring/keyring.o 00:28:32.212 CC lib/keyring/keyring_rpc.o 00:28:32.212 CC lib/trace/trace.o 00:28:32.212 CC lib/trace/trace_rpc.o 00:28:32.212 CC lib/trace/trace_flags.o 00:28:32.212 CC lib/notify/notify.o 00:28:32.469 CC lib/notify/notify_rpc.o 00:28:32.726 LIB libspdk_notify.a 00:28:32.726 LIB libspdk_keyring.a 00:28:32.984 LIB libspdk_trace.a 00:28:33.242 CC lib/thread/thread.o 00:28:33.242 CC lib/thread/iobuf.o 00:28:33.242 CC lib/sock/sock.o 00:28:33.242 CC lib/sock/sock_rpc.o 00:28:33.807 LIB libspdk_sock.a 00:28:34.371 CC lib/nvme/nvme_ctrlr.o 00:28:34.371 CC 
lib/nvme/nvme_ctrlr_cmd.o 00:28:34.371 CC lib/nvme/nvme_fabric.o 00:28:34.371 CC lib/nvme/nvme_ns.o 00:28:34.371 CC lib/nvme/nvme_ns_cmd.o 00:28:34.371 CC lib/nvme/nvme_pcie.o 00:28:34.371 CC lib/nvme/nvme_pcie_common.o 00:28:34.371 CC lib/nvme/nvme.o 00:28:34.371 CC lib/nvme/nvme_qpair.o 00:28:34.371 CC lib/nvme/nvme_quirks.o 00:28:34.371 CC lib/nvme/nvme_transport.o 00:28:34.371 CC lib/nvme/nvme_discovery.o 00:28:34.371 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:28:34.371 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:28:34.371 CC lib/nvme/nvme_tcp.o 00:28:34.371 CC lib/nvme/nvme_opal.o 00:28:34.371 CC lib/nvme/nvme_io_msg.o 00:28:34.371 CC lib/nvme/nvme_poll_group.o 00:28:34.371 CC lib/nvme/nvme_zns.o 00:28:34.371 CC lib/nvme/nvme_stubs.o 00:28:34.371 CC lib/nvme/nvme_auth.o 00:28:34.371 CC lib/nvme/nvme_cuse.o 00:28:35.304 LIB libspdk_thread.a 00:28:36.767 CC lib/blob/blobstore.o 00:28:36.767 CC lib/blob/request.o 00:28:36.767 CC lib/blob/zeroes.o 00:28:36.767 CC lib/init/json_config.o 00:28:36.767 CC lib/init/subsystem_rpc.o 00:28:36.767 CC lib/blob/blob_bs_dev.o 00:28:36.767 CC lib/init/rpc.o 00:28:36.767 CC lib/init/subsystem.o 00:28:36.767 CC lib/virtio/virtio.o 00:28:36.767 CC lib/accel/accel.o 00:28:36.767 CC lib/virtio/virtio_vhost_user.o 00:28:36.767 CC lib/virtio/virtio_vfio_user.o 00:28:36.767 CC lib/accel/accel_rpc.o 00:28:36.767 CC lib/virtio/virtio_pci.o 00:28:36.767 CC lib/accel/accel_sw.o 00:28:37.332 LIB libspdk_init.a 00:28:37.590 LIB libspdk_virtio.a 00:28:38.155 CC lib/event/app.o 00:28:38.155 CC lib/event/log_rpc.o 00:28:38.155 CC lib/event/reactor.o 00:28:38.155 CC lib/event/app_rpc.o 00:28:38.155 CC lib/event/scheduler_static.o 00:28:38.412 LIB libspdk_accel.a 00:28:38.412 LIB libspdk_nvme.a 00:28:38.976 LIB libspdk_event.a 00:28:39.234 CC lib/bdev/bdev_rpc.o 00:28:39.234 CC lib/bdev/bdev.o 00:28:39.234 CC lib/bdev/bdev_zone.o 00:28:39.234 CC lib/bdev/part.o 00:28:39.234 CC lib/bdev/scsi_nvme.o 00:28:41.134 LIB libspdk_blob.a 00:28:42.067 CC 
lib/lvol/lvol.o 00:28:42.067 CC lib/blobfs/blobfs.o 00:28:42.067 CC lib/blobfs/tree.o 00:28:42.633 LIB libspdk_bdev.a 00:28:43.601 LIB libspdk_lvol.a 00:28:43.601 LIB libspdk_blobfs.a 00:28:44.175 CC lib/nbd/nbd.o 00:28:44.175 CC lib/nbd/nbd_rpc.o 00:28:44.175 CC lib/scsi/dev.o 00:28:44.175 CC lib/scsi/lun.o 00:28:44.175 CC lib/scsi/scsi.o 00:28:44.175 CC lib/nvmf/ctrlr.o 00:28:44.175 CC lib/scsi/port.o 00:28:44.175 CC lib/scsi/scsi_pr.o 00:28:44.175 CC lib/nvmf/ctrlr_discovery.o 00:28:44.175 CC lib/scsi/scsi_rpc.o 00:28:44.175 CC lib/scsi/scsi_bdev.o 00:28:44.175 CC lib/nvmf/ctrlr_bdev.o 00:28:44.175 CC lib/scsi/task.o 00:28:44.175 CC lib/ftl/ftl_core.o 00:28:44.175 CC lib/nvmf/subsystem.o 00:28:44.175 CC lib/nvmf/nvmf.o 00:28:44.175 CC lib/ftl/ftl_init.o 00:28:44.175 CC lib/ftl/ftl_layout.o 00:28:44.175 CC lib/nvmf/nvmf_rpc.o 00:28:44.175 CC lib/nvmf/transport.o 00:28:44.175 CC lib/ftl/ftl_debug.o 00:28:44.175 CC lib/nvmf/tcp.o 00:28:44.175 CC lib/nvmf/stubs.o 00:28:44.175 CC lib/ftl/ftl_io.o 00:28:44.175 CC lib/ftl/ftl_sb.o 00:28:44.175 CC lib/nvmf/mdns_server.o 00:28:44.175 CC lib/ftl/ftl_l2p.o 00:28:44.175 CC lib/nvmf/auth.o 00:28:44.175 CC lib/ftl/ftl_l2p_flat.o 00:28:44.175 CC lib/ftl/ftl_nv_cache.o 00:28:44.432 CC lib/ftl/ftl_band.o 00:28:44.432 CC lib/ftl/ftl_band_ops.o 00:28:44.432 CC lib/ftl/ftl_writer.o 00:28:44.432 CC lib/ftl/ftl_rq.o 00:28:44.432 CC lib/ftl/ftl_reloc.o 00:28:44.432 CC lib/ftl/ftl_l2p_cache.o 00:28:44.432 CC lib/ftl/ftl_p2l.o 00:28:44.432 CC lib/ftl/mngt/ftl_mngt.o 00:28:44.432 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:28:44.432 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:28:44.432 CC lib/ftl/mngt/ftl_mngt_startup.o 00:28:44.432 CC lib/ftl/mngt/ftl_mngt_md.o 00:28:44.432 CC lib/ftl/mngt/ftl_mngt_misc.o 00:28:44.432 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:28:44.432 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:28:44.432 CC lib/ftl/mngt/ftl_mngt_band.o 00:28:44.432 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:28:44.432 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:28:44.432 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:28:44.432 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:28:44.432 CC lib/ftl/utils/ftl_conf.o 00:28:44.432 CC lib/ftl/utils/ftl_md.o 00:28:44.433 CC lib/ftl/utils/ftl_mempool.o 00:28:44.433 CC lib/ftl/utils/ftl_bitmap.o 00:28:44.433 CC lib/ftl/utils/ftl_property.o 00:28:44.433 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:28:44.433 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:28:44.433 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:28:44.433 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:28:44.433 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:28:44.433 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:28:44.433 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:28:44.690 CC lib/ftl/upgrade/ftl_sb_v3.o 00:28:44.690 CC lib/ftl/upgrade/ftl_sb_v5.o 00:28:44.690 CC lib/ftl/nvc/ftl_nvc_dev.o 00:28:44.690 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:28:44.690 CC lib/ftl/base/ftl_base_dev.o 00:28:44.690 CC lib/ftl/base/ftl_base_bdev.o 00:28:46.587 LIB libspdk_nbd.a 00:28:46.845 LIB libspdk_scsi.a 00:28:47.102 LIB libspdk_ftl.a 00:28:47.667 CC lib/iscsi/conn.o 00:28:47.667 CC lib/iscsi/init_grp.o 00:28:47.667 CC lib/iscsi/iscsi.o 00:28:47.667 CC lib/vhost/vhost_rpc.o 00:28:47.667 CC lib/iscsi/md5.o 00:28:47.667 CC lib/vhost/vhost.o 00:28:47.667 CC lib/iscsi/param.o 00:28:47.667 CC lib/vhost/vhost_scsi.o 00:28:47.667 CC lib/vhost/vhost_blk.o 00:28:47.667 CC lib/iscsi/portal_grp.o 00:28:47.667 CC lib/iscsi/iscsi_subsystem.o 00:28:47.667 CC lib/vhost/rte_vhost_user.o 00:28:47.667 CC lib/iscsi/tgt_node.o 00:28:47.667 CC lib/iscsi/iscsi_rpc.o 00:28:47.667 CC lib/iscsi/task.o 00:28:47.926 LIB libspdk_nvmf.a 00:28:49.823 LIB libspdk_vhost.a 00:28:50.388 LIB libspdk_iscsi.a 00:28:55.675 CC module/env_dpdk/env_dpdk_rpc.o 00:28:55.675 CC module/scheduler/gscheduler/gscheduler.o 00:28:55.675 CC module/sock/posix/posix.o 00:28:55.675 CC module/keyring/file/keyring.o 00:28:55.675 CC module/keyring/file/keyring_rpc.o 00:28:55.675 CC module/accel/error/accel_error.o 00:28:55.675 CC 
module/keyring/linux/keyring.o 00:28:55.675 CC module/keyring/linux/keyring_rpc.o 00:28:55.675 CC module/accel/error/accel_error_rpc.o 00:28:55.675 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:28:55.675 CC module/blob/bdev/blob_bdev.o 00:28:55.675 CC module/accel/ioat/accel_ioat.o 00:28:55.676 CC module/accel/ioat/accel_ioat_rpc.o 00:28:55.676 CC module/scheduler/dynamic/scheduler_dynamic.o 00:28:55.676 LIB libspdk_env_dpdk_rpc.a 00:28:55.676 LIB libspdk_scheduler_gscheduler.a 00:28:55.676 LIB libspdk_keyring_file.a 00:28:55.676 LIB libspdk_keyring_linux.a 00:28:55.676 LIB libspdk_accel_error.a 00:28:55.676 LIB libspdk_scheduler_dpdk_governor.a 00:28:55.676 LIB libspdk_scheduler_dynamic.a 00:28:55.676 LIB libspdk_accel_ioat.a 00:28:55.676 LIB libspdk_blob_bdev.a 00:28:55.933 LIB libspdk_sock_posix.a 00:28:56.193 CC module/bdev/malloc/bdev_malloc.o 00:28:56.193 CC module/bdev/malloc/bdev_malloc_rpc.o 00:28:56.193 CC module/bdev/lvol/vbdev_lvol.o 00:28:56.193 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:28:56.193 CC module/blobfs/bdev/blobfs_bdev.o 00:28:56.193 CC module/bdev/passthru/vbdev_passthru.o 00:28:56.193 CC module/bdev/delay/vbdev_delay.o 00:28:56.193 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:28:56.193 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:28:56.193 CC module/bdev/split/vbdev_split.o 00:28:56.193 CC module/bdev/raid/bdev_raid_rpc.o 00:28:56.193 CC module/bdev/raid/bdev_raid.o 00:28:56.193 CC module/bdev/delay/vbdev_delay_rpc.o 00:28:56.193 CC module/bdev/split/vbdev_split_rpc.o 00:28:56.193 CC module/bdev/raid/bdev_raid_sb.o 00:28:56.193 CC module/bdev/null/bdev_null.o 00:28:56.193 CC module/bdev/raid/raid0.o 00:28:56.193 CC module/bdev/raid/raid1.o 00:28:56.193 CC module/bdev/raid/concat.o 00:28:56.193 CC module/bdev/null/bdev_null_rpc.o 00:28:56.193 CC module/bdev/gpt/gpt.o 00:28:56.193 CC module/bdev/nvme/bdev_nvme.o 00:28:56.193 CC module/bdev/ftl/bdev_ftl.o 00:28:56.193 CC module/bdev/gpt/vbdev_gpt.o 00:28:56.193 CC 
module/bdev/ftl/bdev_ftl_rpc.o 00:28:56.193 CC module/bdev/error/vbdev_error.o 00:28:56.193 CC module/bdev/nvme/bdev_nvme_rpc.o 00:28:56.193 CC module/bdev/error/vbdev_error_rpc.o 00:28:56.193 CC module/bdev/nvme/nvme_rpc.o 00:28:56.193 CC module/bdev/nvme/bdev_mdns_client.o 00:28:56.193 CC module/bdev/virtio/bdev_virtio_scsi.o 00:28:56.193 CC module/bdev/nvme/vbdev_opal.o 00:28:56.193 CC module/bdev/nvme/vbdev_opal_rpc.o 00:28:56.193 CC module/bdev/zone_block/vbdev_zone_block.o 00:28:56.193 CC module/bdev/aio/bdev_aio.o 00:28:56.193 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:28:56.193 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:28:56.193 CC module/bdev/aio/bdev_aio_rpc.o 00:28:56.193 CC module/bdev/virtio/bdev_virtio_blk.o 00:28:56.193 CC module/bdev/virtio/bdev_virtio_rpc.o 00:28:57.564 LIB libspdk_blobfs_bdev.a 00:28:57.564 LIB libspdk_bdev_null.a 00:28:57.564 LIB libspdk_bdev_gpt.a 00:28:57.564 LIB libspdk_bdev_split.a 00:28:57.564 LIB libspdk_bdev_aio.a 00:28:57.564 LIB libspdk_bdev_error.a 00:28:57.564 LIB libspdk_bdev_delay.a 00:28:57.564 LIB libspdk_bdev_passthru.a 00:28:57.564 LIB libspdk_bdev_ftl.a 00:28:57.564 LIB libspdk_bdev_malloc.a 00:28:57.564 LIB libspdk_bdev_zone_block.a 00:28:57.822 LIB libspdk_bdev_virtio.a 00:28:58.079 LIB libspdk_bdev_lvol.a 00:28:58.645 LIB libspdk_bdev_raid.a 00:29:00.018 LIB libspdk_bdev_nvme.a 00:29:01.917 CC module/event/subsystems/iobuf/iobuf.o 00:29:01.917 CC module/event/subsystems/vmd/vmd.o 00:29:01.917 CC module/event/subsystems/vmd/vmd_rpc.o 00:29:01.917 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:29:01.917 CC module/event/subsystems/sock/sock.o 00:29:01.917 CC module/event/subsystems/keyring/keyring.o 00:29:01.917 CC module/event/subsystems/scheduler/scheduler.o 00:29:01.917 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:29:02.175 LIB libspdk_event_sock.a 00:29:02.175 LIB libspdk_event_keyring.a 00:29:02.175 LIB libspdk_event_vhost_blk.a 00:29:02.175 LIB libspdk_event_vmd.a 00:29:02.175 LIB 
libspdk_event_scheduler.a 00:29:02.433 LIB libspdk_event_iobuf.a 00:29:02.690 CC module/event/subsystems/accel/accel.o 00:29:02.947 LIB libspdk_event_accel.a 00:29:03.513 CC module/event/subsystems/bdev/bdev.o 00:29:03.771 LIB libspdk_event_bdev.a 00:29:04.336 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:29:04.336 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:29:04.336 CC module/event/subsystems/nbd/nbd.o 00:29:04.336 CC module/event/subsystems/scsi/scsi.o 00:29:04.336 LIB libspdk_event_nbd.a 00:29:04.594 LIB libspdk_event_scsi.a 00:29:04.594 LIB libspdk_event_nvmf.a 00:29:04.851 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:29:04.851 CC module/event/subsystems/iscsi/iscsi.o 00:29:05.114 LIB libspdk_event_vhost_scsi.a 00:29:05.114 LIB libspdk_event_iscsi.a 00:29:05.384 make[1]: Nothing to be done for 'all'. 00:29:05.642 CXX app/trace/trace.o 00:29:05.642 CC app/spdk_lspci/spdk_lspci.o 00:29:05.642 CC app/trace_record/trace_record.o 00:29:05.642 CC app/spdk_nvme_discover/discovery_aer.o 00:29:05.642 CC app/spdk_nvme_identify/identify.o 00:29:05.642 CC app/spdk_nvme_perf/perf.o 00:29:05.642 CC app/iscsi_tgt/iscsi_tgt.o 00:29:05.642 CC app/spdk_top/spdk_top.o 00:29:05.642 CC app/nvmf_tgt/nvmf_main.o 00:29:05.642 CC examples/interrupt_tgt/interrupt_tgt.o 00:29:05.642 CC app/spdk_tgt/spdk_tgt.o 00:29:05.642 CC app/spdk_dd/spdk_dd.o 00:29:05.899 CC examples/util/zipf/zipf.o 00:29:05.899 CC examples/ioat/perf/perf.o 00:29:05.899 CC examples/ioat/verify/verify.o 00:29:06.157 LINK spdk_lspci 00:29:06.157 LINK nvmf_tgt 00:29:06.157 LINK zipf 00:29:06.157 LINK interrupt_tgt 00:29:06.157 LINK iscsi_tgt 00:29:06.157 LINK spdk_nvme_discover 00:29:06.157 LINK ioat_perf 00:29:06.157 LINK spdk_tgt 00:29:06.414 LINK verify 00:29:06.414 LINK spdk_trace_record 00:29:06.672 LINK spdk_trace 00:29:06.672 LINK spdk_dd 00:29:07.603 CC app/vhost/vhost.o 00:29:08.169 LINK vhost 00:29:09.541 LINK spdk_top 00:29:09.541 LINK spdk_nvme_perf 00:29:09.799 LINK spdk_nvme_identify 
00:29:27.873 CC examples/vmd/lsvmd/lsvmd.o 00:29:27.873 CC examples/vmd/led/led.o 00:29:27.873 CC examples/sock/hello_world/hello_sock.o 00:29:27.873 CC examples/thread/thread/thread_ex.o 00:29:27.873 LINK lsvmd 00:29:27.873 LINK led 00:29:27.873 LINK hello_sock 00:29:27.873 LINK thread 00:29:34.436 CC examples/nvme/nvme_manage/nvme_manage.o 00:29:34.436 CC examples/nvme/cmb_copy/cmb_copy.o 00:29:34.436 CC examples/nvme/arbitration/arbitration.o 00:29:34.436 CC examples/nvme/hotplug/hotplug.o 00:29:34.436 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:29:34.436 CC examples/nvme/hello_world/hello_world.o 00:29:34.436 CC examples/nvme/abort/abort.o 00:29:34.436 CC examples/nvme/reconnect/reconnect.o 00:29:34.436 LINK abort 00:29:34.436 LINK reconnect 00:29:34.436 LINK arbitration 00:29:34.436 LINK pmr_persistence 00:29:34.436 LINK cmb_copy 00:29:34.694 LINK hotplug 00:29:34.694 LINK hello_world 00:29:34.694 LINK nvme_manage 00:29:39.959 CC examples/accel/perf/accel_perf.o 00:29:39.959 CC examples/blob/hello_world/hello_blob.o 00:29:39.959 CC examples/blob/cli/blobcli.o 00:29:39.959 LINK hello_blob 00:29:39.959 LINK accel_perf 00:29:39.959 LINK blobcli 00:29:46.531 CC examples/bdev/hello_world/hello_bdev.o 00:29:46.531 CC examples/bdev/bdevperf/bdevperf.o 00:29:46.800 LINK hello_bdev 00:29:47.735 LINK bdevperf 00:30:02.624 CC examples/nvmf/nvmf/nvmf.o 00:30:02.882 LINK nvmf 00:30:12.854 make: Leaving directory '/mnt/sdadir/spdk' 00:30:12.854 17:34:30 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@101 -- # rm -rf /mnt/sdadir/spdk 00:30:59.610 17:35:16 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@102 -- # umount /mnt/sdadir 00:30:59.610 17:35:16 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@103 -- # rm -rf /mnt/sdadir 00:30:59.610 17:35:16 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@105 -- # stats=($(cat "/sys/block/$dev/stat")) 00:30:59.610 17:35:16 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@105 -- # cat 
/sys/block/sda/stat 00:30:59.610 READ IO cnt: 102 merges: 0 sectors: 3352 ticks: 77 00:30:59.610 WRITE IO cnt: 638088 merges: 625104 sectors: 10888320 ticks: 896507 00:30:59.610 in flight: 0 io ticks: 316808 time in queue: 968291 00:30:59.610 17:35:16 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@107 -- # printf 'READ IO cnt: % 8u merges: % 8u sectors: % 8u ticks: % 8u\n' 102 0 3352 77 00:30:59.610 17:35:16 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@109 -- # printf 'WRITE IO cnt: % 8u merges: % 8u sectors: % 8u ticks: % 8u\n' 638088 625104 10888320 896507 00:30:59.610 17:35:16 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@111 -- # printf 'in flight: % 8u io ticks: % 8u time in queue: % 8u\n' 0 316808 968291 00:30:59.610 17:35:16 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@1 -- # cleanup 00:30:59.610 17:35:16 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_delete Nvme0n1 00:30:59.610 [2024-07-22 17:35:16.680203] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Nvme0n1p0) received event(SPDK_BDEV_EVENT_REMOVE) 00:30:59.610 17:35:16 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@13 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_delete EE_Malloc0 00:30:59.611 17:35:16 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:59.611 17:35:17 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@15 -- # killprocess 83133 00:30:59.611 17:35:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@948 -- # '[' -z 83133 ']' 00:30:59.611 17:35:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@952 -- # kill -0 83133 00:30:59.611 17:35:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@953 -- # uname 00:30:59.611 17:35:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:59.611 17:35:17 
iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83133 00:30:59.611 killing process with pid 83133 00:30:59.611 17:35:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:59.611 17:35:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:59.611 17:35:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83133' 00:30:59.611 17:35:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@967 -- # kill 83133 00:30:59.611 17:35:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@972 -- # wait 83133 00:31:02.162 17:35:20 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@17 -- # mountpoint -q /mnt/sdadir 00:31:02.162 17:35:20 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@18 -- # rm -rf /mnt/sdadir 00:31:02.162 17:35:20 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@20 -- # iscsicleanup 00:31:02.162 Cleaning up iSCSI connection 00:31:02.162 17:35:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:31:02.162 17:35:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:31:02.162 Logging out of session [sid: 72, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] 00:31:02.162 Logout of [sid: 72, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:31:02.162 17:35:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:31:02.162 17:35:21 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@983 -- # rm -rf 00:31:02.162 17:35:21 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@21 -- # iscsitestfini 00:31:02.162 17:35:21 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:31:02.162 00:31:02.162 real 7m31.111s 00:31:02.162 user 12m43.479s 00:31:02.162 sys 2m55.228s 00:31:02.162 17:35:21 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:02.162 17:35:21 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:31:02.162 ************************************ 00:31:02.162 END TEST iscsi_tgt_ext4test 00:31:02.162 ************************************ 00:31:02.162 17:35:21 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:31:02.162 17:35:21 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@49 -- # '[' 1 -eq 1 ']' 00:31:02.162 17:35:21 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@50 -- # hash ceph 00:31:02.162 17:35:21 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@54 -- # run_test iscsi_tgt_rbd /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rbd/rbd.sh 00:31:02.162 17:35:21 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:02.162 17:35:21 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:02.162 17:35:21 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:31:02.162 ************************************ 00:31:02.162 START TEST iscsi_tgt_rbd 00:31:02.162 ************************************ 00:31:02.162 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rbd/rbd.sh 00:31:02.420 * Looking for test storage... 
00:31:02.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rbd 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- 
iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@11 -- # iscsitestinit 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@13 -- # timing_enter rbd_setup 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@14 -- # rbd_setup 10.0.0.1 spdk_iscsi_ns 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1005 -- # '[' -z 10.0.0.1 ']' 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1009 -- # '[' -n spdk_iscsi_ns ']' 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1010 -- # grep spdk_iscsi_ns 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1010 -- # ip netns list 00:31:02.420 spdk_iscsi_ns (id: 0) 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1011 -- # NS_CMD='ip netns exec spdk_iscsi_ns' 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1018 -- # hash ceph 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1019 -- # export PG_NUM=128 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1019 -- # PG_NUM=128 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1020 -- # export RBD_POOL=rbd 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1020 -- # RBD_POOL=rbd 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1021 -- # export RBD_NAME=foo 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1021 
-- # RBD_NAME=foo 00:31:02.420 17:35:21 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1022 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:31:02.420 + base_dir=/var/tmp/ceph 00:31:02.420 + image=/var/tmp/ceph/ceph_raw.img 00:31:02.420 + dev=/dev/loop200 00:31:02.420 + pkill -9 ceph 00:31:02.420 + sleep 3 00:31:05.733 + umount /dev/loop200p2 00:31:05.733 umount: /dev/loop200p2: no mount point specified. 00:31:05.733 + losetup -d /dev/loop200 00:31:05.733 losetup: /dev/loop200: failed to use device: No such device 00:31:05.733 + rm -rf /var/tmp/ceph 00:31:05.733 17:35:24 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1023 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 10.0.0.1 00:31:05.733 + set -e 00:31:05.733 +++ dirname /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 00:31:05.733 ++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/ceph 00:31:05.733 + script_dir=/home/vagrant/spdk_repo/spdk/scripts/ceph 00:31:05.733 + base_dir=/var/tmp/ceph 00:31:05.733 + mon_ip=10.0.0.1 00:31:05.733 + mon_dir=/var/tmp/ceph/mon.a 00:31:05.733 + pid_dir=/var/tmp/ceph/pid 00:31:05.733 + ceph_conf=/var/tmp/ceph/ceph.conf 00:31:05.733 + mnt_dir=/var/tmp/ceph/mnt 00:31:05.733 + image=/var/tmp/ceph_raw.img 00:31:05.733 + dev=/dev/loop200 00:31:05.733 + modprobe loop 00:31:05.733 + umount /dev/loop200p2 00:31:05.733 umount: /dev/loop200p2: no mount point specified. 00:31:05.733 + true 00:31:05.733 + losetup -d /dev/loop200 00:31:05.733 losetup: /dev/loop200: failed to use device: No such device 00:31:05.733 + true 00:31:05.733 + '[' -d /var/tmp/ceph ']' 00:31:05.733 + mkdir /var/tmp/ceph 00:31:05.733 + cp /home/vagrant/spdk_repo/spdk/scripts/ceph/ceph.conf /var/tmp/ceph/ceph.conf 00:31:05.733 + '[' '!' 
-e /var/tmp/ceph_raw.img ']' 00:31:05.733 + fallocate -l 4G /var/tmp/ceph_raw.img 00:31:05.733 + mknod /dev/loop200 b 7 200 00:31:05.733 + losetup /dev/loop200 /var/tmp/ceph_raw.img 00:31:05.733 Partitioning /dev/loop200 00:31:05.733 + PARTED='parted -s' 00:31:05.733 + SGDISK=sgdisk 00:31:05.733 + echo 'Partitioning /dev/loop200' 00:31:05.733 + parted -s /dev/loop200 mktable gpt 00:31:05.733 + sleep 2 00:31:07.646 + parted -s /dev/loop200 mkpart primary 0% 2GiB 00:31:07.646 + parted -s /dev/loop200 mkpart primary 2GiB 100% 00:31:07.646 + partno=0 00:31:07.646 Setting name on /dev/loop200 00:31:07.646 + echo 'Setting name on /dev/loop200' 00:31:07.646 + sgdisk -c 1:osd-device-0-journal /dev/loop200 00:31:08.581 Warning: The kernel is still using the old partition table. 00:31:08.581 The new table will be used at the next reboot or after you 00:31:08.581 run partprobe(8) or kpartx(8) 00:31:08.581 The operation has completed successfully. 00:31:08.581 + sgdisk -c 2:osd-device-0-data /dev/loop200 00:31:09.515 Warning: The kernel is still using the old partition table. 00:31:09.515 The new table will be used at the next reboot or after you 00:31:09.515 run partprobe(8) or kpartx(8) 00:31:09.515 The operation has completed successfully. 
00:31:09.515 + kpartx /dev/loop200 00:31:09.515 loop200p1 : 0 4192256 /dev/loop200 2048 00:31:09.515 loop200p2 : 0 4192256 /dev/loop200 4194304 00:31:09.515 ++ ceph -v 00:31:09.515 ++ awk '{print $3}' 00:31:09.773 + ceph_version=17.2.7 00:31:09.773 + ceph_maj=17 00:31:09.773 + '[' 17 -gt 12 ']' 00:31:09.773 + update_config=true 00:31:09.773 + rm -f /var/log/ceph/ceph-mon.a.log 00:31:09.773 + set_min_mon_release='--set-min-mon-release 14' 00:31:09.773 + ceph_osd_extra_config='--check-needs-journal --no-mon-config' 00:31:09.773 + mnt_pt=/var/tmp/ceph/mnt/osd-device-0-data 00:31:09.773 + mkdir -p /var/tmp/ceph/mnt/osd-device-0-data 00:31:09.774 + mkfs.xfs -f /dev/disk/by-partlabel/osd-device-0-data 00:31:09.774 meta-data=/dev/disk/by-partlabel/osd-device-0-data isize=512 agcount=4, agsize=131008 blks 00:31:09.774 = sectsz=512 attr=2, projid32bit=1 00:31:09.774 = crc=1 finobt=1, sparse=1, rmapbt=0 00:31:09.774 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:31:09.774 data = bsize=4096 blocks=524032, imaxpct=25 00:31:09.774 = sunit=0 swidth=0 blks 00:31:09.774 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:31:09.774 log =internal log bsize=4096 blocks=16384, version=2 00:31:09.774 = sectsz=512 sunit=0 blks, lazy-count=1 00:31:09.774 realtime =none extsz=4096 blocks=0, rtextents=0 00:31:09.774 Discarding blocks...Done. 00:31:09.774 + mount /dev/disk/by-partlabel/osd-device-0-data /var/tmp/ceph/mnt/osd-device-0-data 00:31:09.774 + cat 00:31:09.774 + rm -rf '/var/tmp/ceph/mon.a/*' 00:31:09.774 + mkdir -p /var/tmp/ceph/mon.a 00:31:09.774 + mkdir -p /var/tmp/ceph/pid 00:31:09.774 + rm -f /etc/ceph/ceph.client.admin.keyring 00:31:09.774 + ceph-authtool --create-keyring --gen-key --name=mon. 
/var/tmp/ceph/keyring --cap mon 'allow *' 00:31:09.774 creating /var/tmp/ceph/keyring 00:31:09.774 + ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /var/tmp/ceph/keyring 00:31:09.774 + monmaptool --create --clobber --add a 10.0.0.1:12046 --print /var/tmp/ceph/monmap --set-min-mon-release 14 00:31:09.774 monmaptool: monmap file /var/tmp/ceph/monmap 00:31:09.774 monmaptool: generated fsid 8674391f-6b09-469f-87b0-b7e691535170 00:31:09.774 setting min_mon_release = octopus 00:31:09.774 epoch 0 00:31:09.774 fsid 8674391f-6b09-469f-87b0-b7e691535170 00:31:09.774 last_changed 2024-07-22T17:35:28.684522+0000 00:31:09.774 created 2024-07-22T17:35:28.684522+0000 00:31:09.774 min_mon_release 15 (octopus) 00:31:09.774 election_strategy: 1 00:31:09.774 0: v2:10.0.0.1:12046/0 mon.a 00:31:09.774 monmaptool: writing epoch 0 to /var/tmp/ceph/monmap (1 monitors) 00:31:09.774 + sh -c 'ulimit -c unlimited && exec ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring --mon-data=/var/tmp/ceph/mon.a' 00:31:10.032 + '[' true = true ']' 00:31:10.032 + sed -i 's/mon addr = /mon addr = v2:/g' /var/tmp/ceph/ceph.conf 00:31:10.032 + cp /var/tmp/ceph/keyring /var/tmp/ceph/mon.a/keyring 00:31:10.032 + cp /var/tmp/ceph/ceph.conf /etc/ceph/ceph.conf 00:31:10.032 + cp /var/tmp/ceph/keyring /etc/ceph/keyring 00:31:10.032 + cp /var/tmp/ceph/keyring /etc/ceph/ceph.client.admin.keyring 00:31:10.032 + chmod a+r /etc/ceph/ceph.client.admin.keyring 00:31:10.032 ++ hostname 00:31:10.032 + ceph-run sh -c 'ulimit -n 16384 && ulimit -c unlimited && exec ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring --pid-file=/var/tmp/ceph/pid/root@fedora38-cloud-1716830599-074-updated-1705279005.pid --mon-data=/var/tmp/ceph/mon.a' 00:31:10.032 + true 00:31:10.032 + '[' true = true ']' 00:31:10.032 + ceph-conf --name mon.a --show-config-value log_file 00:31:10.032 
/var/log/ceph/ceph-mon.a.log 00:31:10.032 ++ ceph -s 00:31:10.032 ++ grep id 00:31:10.032 ++ awk '{print $2}' 00:31:10.290 + fsid=8674391f-6b09-469f-87b0-b7e691535170 00:31:10.290 + sed -i 's/perf = true/perf = true\n\tfsid = 8674391f-6b09-469f-87b0-b7e691535170 \n/g' /var/tmp/ceph/ceph.conf 00:31:10.290 + (( ceph_maj < 18 )) 00:31:10.290 + sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' /var/tmp/ceph/ceph.conf 00:31:10.290 + cat /var/tmp/ceph/ceph.conf 00:31:10.290 [global] 00:31:10.290 debug_lockdep = 0/0 00:31:10.291 debug_context = 0/0 00:31:10.291 debug_crush = 0/0 00:31:10.291 debug_buffer = 0/0 00:31:10.291 debug_timer = 0/0 00:31:10.291 debug_filer = 0/0 00:31:10.291 debug_objecter = 0/0 00:31:10.291 debug_rados = 0/0 00:31:10.291 debug_rbd = 0/0 00:31:10.291 debug_ms = 0/0 00:31:10.291 debug_monc = 0/0 00:31:10.291 debug_tp = 0/0 00:31:10.291 debug_auth = 0/0 00:31:10.291 debug_finisher = 0/0 00:31:10.291 debug_heartbeatmap = 0/0 00:31:10.291 debug_perfcounter = 0/0 00:31:10.291 debug_asok = 0/0 00:31:10.291 debug_throttle = 0/0 00:31:10.291 debug_mon = 0/0 00:31:10.291 debug_paxos = 0/0 00:31:10.291 debug_rgw = 0/0 00:31:10.291 00:31:10.291 perf = true 00:31:10.291 osd objectstore = filestore 00:31:10.291 00:31:10.291 fsid = 8674391f-6b09-469f-87b0-b7e691535170 00:31:10.291 00:31:10.291 mutex_perf_counter = false 00:31:10.291 throttler_perf_counter = false 00:31:10.291 rbd cache = false 00:31:10.291 mon_allow_pool_delete = true 00:31:10.291 00:31:10.291 osd_pool_default_size = 1 00:31:10.291 00:31:10.291 [mon] 00:31:10.291 mon_max_pool_pg_num=166496 00:31:10.291 mon_osd_max_split_count = 10000 00:31:10.291 mon_pg_warn_max_per_osd = 10000 00:31:10.291 00:31:10.291 [osd] 00:31:10.291 osd_op_threads = 64 00:31:10.291 filestore_queue_max_ops=5000 00:31:10.291 filestore_queue_committing_max_ops=5000 00:31:10.291 journal_max_write_entries=1000 00:31:10.291 journal_queue_max_ops=3000 00:31:10.291 objecter_inflight_ops=102400 00:31:10.291 
filestore_wbthrottle_enable=false 00:31:10.291 filestore_queue_max_bytes=1048576000 00:31:10.291 filestore_queue_committing_max_bytes=1048576000 00:31:10.291 journal_max_write_bytes=1048576000 00:31:10.291 journal_queue_max_bytes=1048576000 00:31:10.291 ms_dispatch_throttle_bytes=1048576000 00:31:10.291 objecter_inflight_op_bytes=1048576000 00:31:10.291 filestore_max_sync_interval=10 00:31:10.291 osd_client_message_size_cap = 0 00:31:10.291 osd_client_message_cap = 0 00:31:10.291 osd_enable_op_tracker = false 00:31:10.291 filestore_fd_cache_size = 10240 00:31:10.291 filestore_fd_cache_shards = 64 00:31:10.291 filestore_op_threads = 16 00:31:10.291 osd_op_num_shards = 48 00:31:10.291 osd_op_num_threads_per_shard = 2 00:31:10.291 osd_pg_object_context_cache_count = 10240 00:31:10.291 filestore_odsync_write = True 00:31:10.291 journal_dynamic_throttle = True 00:31:10.291 00:31:10.291 [osd.0] 00:31:10.291 osd data = /var/tmp/ceph/mnt/osd-device-0-data 00:31:10.291 osd journal = /dev/disk/by-partlabel/osd-device-0-journal 00:31:10.291 00:31:10.291 # add mon address 00:31:10.291 [mon.a] 00:31:10.291 mon addr = v2:10.0.0.1:12046 00:31:10.291 + i=0 00:31:10.291 + mkdir -p /var/tmp/ceph/mnt 00:31:10.552 ++ uuidgen 00:31:10.552 + uuid=de67c27d-4420-473a-81f8-9758b604a565 00:31:10.552 + ceph -c /var/tmp/ceph/ceph.conf osd create de67c27d-4420-473a-81f8-9758b604a565 0 00:31:10.811 0 00:31:10.811 + ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --mkfs --mkkey --osd-uuid de67c27d-4420-473a-81f8-9758b604a565 --check-needs-journal --no-mon-config 00:31:10.811 2024-07-22T17:35:29.644+0000 7fe9ad85c400 -1 auth: error reading file: /var/tmp/ceph/mnt/osd-device-0-data/keyring: can't open /var/tmp/ceph/mnt/osd-device-0-data/keyring: (2) No such file or directory 00:31:10.811 2024-07-22T17:35:29.644+0000 7fe9ad85c400 -1 created new key in keyring /var/tmp/ceph/mnt/osd-device-0-data/keyring 00:31:10.811 2024-07-22T17:35:29.688+0000 7fe9ad85c400 -1 journal check: ondisk fsid 
00000000-0000-0000-0000-000000000000 doesn't match expected de67c27d-4420-473a-81f8-9758b604a565, invalid (someone else's?) journal 00:31:10.811 2024-07-22T17:35:29.726+0000 7fe9ad85c400 -1 journal do_read_entry(4096): bad header magic 00:31:10.811 2024-07-22T17:35:29.726+0000 7fe9ad85c400 -1 journal do_read_entry(4096): bad header magic 00:31:11.069 ++ hostname 00:31:11.069 + ceph -c /var/tmp/ceph/ceph.conf osd crush add osd.0 1.0 host=fedora38-cloud-1716830599-074-updated-1705279005 root=default 00:31:12.442 add item id 0 name 'osd.0' weight 1 at location {host=fedora38-cloud-1716830599-074-updated-1705279005,root=default} to crush map 00:31:12.442 + ceph -c /var/tmp/ceph/ceph.conf -i /var/tmp/ceph/mnt/osd-device-0-data/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow *' 00:31:12.442 added key for osd.0 00:31:12.442 ++ ceph -c /var/tmp/ceph/ceph.conf config get osd osd_class_dir 00:31:12.700 + class_dir=/lib64/rados-classes 00:31:12.700 + [[ -e /lib64/rados-classes ]] 00:31:12.700 + ceph -c /var/tmp/ceph/ceph.conf config set osd osd_class_dir /lib64/rados-classes 00:31:12.958 + pkill -9 ceph-osd 00:31:12.958 + true 00:31:12.958 + sleep 2 00:31:15.486 + mkdir -p /var/tmp/ceph/pid 00:31:15.486 + env -i TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --pid-file=/var/tmp/ceph/pid/ceph-osd.0.pid 00:31:15.486 2024-07-22T17:35:33.921+0000 7fdd02c51400 -1 Falling back to public interface 00:31:15.486 2024-07-22T17:35:33.962+0000 7fdd02c51400 -1 journal do_read_entry(8192): bad header magic 00:31:15.486 2024-07-22T17:35:33.962+0000 7fdd02c51400 -1 journal do_read_entry(8192): bad header magic 00:31:15.486 2024-07-22T17:35:33.995+0000 7fdd02c51400 -1 osd.0 0 log_to_monitors true 00:31:16.052 17:35:34 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1025 -- # ip netns exec spdk_iscsi_ns ceph osd pool create rbd 128 00:31:17.427 pool 'rbd' created 00:31:17.427 17:35:36 iscsi_tgt.iscsi_tgt_rbd -- 
common/autotest_common.sh@1026 -- # ip netns exec spdk_iscsi_ns rbd create foo --size 1000 00:31:22.695 17:35:41 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@15 -- # trap 'rbd_cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:22.695 17:35:41 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@16 -- # timing_exit rbd_setup 00:31:22.695 17:35:41 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:22.695 17:35:41 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:22.695 17:35:41 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@18 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:31:22.695 17:35:41 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@20 -- # timing_enter start_iscsi_tgt 00:31:22.695 17:35:41 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:22.695 17:35:41 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:22.695 17:35:41 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@23 -- # pid=123664 00:31:22.695 17:35:41 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@25 -- # trap 'killprocess $pid; rbd_cleanup; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:31:22.695 17:35:41 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@22 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:31:22.695 17:35:41 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@27 -- # waitforlisten 123664 00:31:22.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:22.695 17:35:41 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@829 -- # '[' -z 123664 ']' 00:31:22.695 17:35:41 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.695 17:35:41 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:22.695 17:35:41 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:22.695 17:35:41 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:22.695 17:35:41 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:22.695 [2024-07-22 17:35:41.473615] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:22.695 [2024-07-22 17:35:41.473830] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123664 ] 00:31:22.953 [2024-07-22 17:35:41.650271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:23.211 [2024-07-22 17:35:41.907982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.211 [2024-07-22 17:35:41.908052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:23.211 [2024-07-22 17:35:41.908189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.211 [2024-07-22 17:35:41.908197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:23.469 17:35:42 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:23.469 17:35:42 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@862 -- # return 0 00:31:23.469 17:35:42 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@28 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:31:23.469 17:35:42 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.469 17:35:42 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:23.469 17:35:42 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.469 17:35:42 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@29 -- # rpc_cmd framework_start_init 00:31:23.469 17:35:42 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.469 17:35:42 iscsi_tgt.iscsi_tgt_rbd -- 
common/autotest_common.sh@10 -- # set +x 00:31:24.404 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.404 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@30 -- # echo 'iscsi_tgt is listening. Running tests...' 00:31:24.404 iscsi_tgt is listening. Running tests... 00:31:24.404 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@32 -- # timing_exit start_iscsi_tgt 00:31:24.404 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:24.404 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:24.404 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@34 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:31:24.404 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.404 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:24.404 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.404 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@35 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:31:24.404 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.404 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:24.404 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.404 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@36 -- # rpc_cmd bdev_rbd_register_cluster iscsi_rbd_cluster --key-file /etc/ceph/ceph.client.admin.keyring --config-file /etc/ceph/ceph.conf 00:31:24.404 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.404 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:24.404 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.404 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@36 -- # rbd_cluster_name=iscsi_rbd_cluster 
00:31:24.404 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@37 -- # rpc_cmd bdev_rbd_get_clusters_info -b iscsi_rbd_cluster 00:31:24.405 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.405 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:24.405 { 00:31:24.405 "cluster_name": "iscsi_rbd_cluster", 00:31:24.405 "config_file": "/etc/ceph/ceph.conf", 00:31:24.405 "key_file": "/etc/ceph/ceph.client.admin.keyring" 00:31:24.405 } 00:31:24.405 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.405 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@38 -- # rpc_cmd bdev_rbd_create rbd foo 4096 -c iscsi_rbd_cluster 00:31:24.405 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.405 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:24.405 [2024-07-22 17:35:43.287036] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:31:24.405 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.405 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@38 -- # rbd_bdev=Ceph0 00:31:24.405 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@39 -- # rpc_cmd bdev_get_bdevs 00:31:24.405 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.405 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:24.405 [ 00:31:24.405 { 00:31:24.405 "name": "Ceph0", 00:31:24.405 "aliases": [ 00:31:24.405 "927f56e8-917e-4eba-80e2-160037cdbb11" 00:31:24.405 ], 00:31:24.405 "product_name": "Ceph Rbd Disk", 00:31:24.405 "block_size": 4096, 00:31:24.405 "num_blocks": 256000, 00:31:24.405 "uuid": "927f56e8-917e-4eba-80e2-160037cdbb11", 00:31:24.405 "assigned_rate_limits": { 00:31:24.405 "rw_ios_per_sec": 0, 00:31:24.405 "rw_mbytes_per_sec": 0, 00:31:24.405 "r_mbytes_per_sec": 0, 00:31:24.405 "w_mbytes_per_sec": 0 
00:31:24.405 }, 00:31:24.405 "claimed": false, 00:31:24.405 "zoned": false, 00:31:24.405 "supported_io_types": { 00:31:24.405 "read": true, 00:31:24.405 "write": true, 00:31:24.405 "unmap": true, 00:31:24.405 "flush": true, 00:31:24.405 "reset": true, 00:31:24.405 "nvme_admin": false, 00:31:24.405 "nvme_io": false, 00:31:24.405 "nvme_io_md": false, 00:31:24.405 "write_zeroes": true, 00:31:24.405 "zcopy": false, 00:31:24.405 "get_zone_info": false, 00:31:24.405 "zone_management": false, 00:31:24.405 "zone_append": false, 00:31:24.405 "compare": false, 00:31:24.405 "compare_and_write": true, 00:31:24.405 "abort": false, 00:31:24.405 "seek_hole": false, 00:31:24.405 "seek_data": false, 00:31:24.405 "copy": false, 00:31:24.405 "nvme_iov_md": false 00:31:24.405 }, 00:31:24.405 "driver_specific": { 00:31:24.405 "rbd": { 00:31:24.405 "pool_name": "rbd", 00:31:24.405 "rbd_name": "foo", 00:31:24.405 "config_file": "/etc/ceph/ceph.conf", 00:31:24.405 "key_file": "/etc/ceph/ceph.client.admin.keyring" 00:31:24.405 } 00:31:24.405 } 00:31:24.405 } 00:31:24.405 ] 00:31:24.405 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.405 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@41 -- # rpc_cmd bdev_rbd_resize Ceph0 2000 00:31:24.405 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.405 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:24.405 true 00:31:24.663 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.663 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # rpc_cmd bdev_get_bdevs 00:31:24.663 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # grep num_blocks 00:31:24.663 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.663 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # sed 's/[^[:digit:]]//g' 00:31:24.663 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- 
common/autotest_common.sh@10 -- # set +x 00:31:24.663 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.663 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # num_block=512000 00:31:24.663 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@44 -- # total_size=2000 00:31:24.663 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@45 -- # '[' 2000 '!=' 2000 ']' 00:31:24.663 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@53 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Ceph0:0 1:2 64 -d 00:31:24.663 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.663 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:24.663 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.663 17:35:43 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@54 -- # sleep 1 00:31:25.598 17:35:44 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@56 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:31:25.598 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:31:25.598 17:35:44 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@57 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:31:25.598 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:31:25.598 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:31:25.598 17:35:44 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@58 -- # waitforiscsidevices 1 00:31:25.598 17:35:44 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@116 -- # local num=1 00:31:25.598 17:35:44 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:31:25.598 17:35:44 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:31:25.598 17:35:44 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:31:25.598 17:35:44 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:31:25.598 [2024-07-22 17:35:44.444879] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:25.598 17:35:44 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@119 -- # n=1 00:31:25.598 17:35:44 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:31:25.598 17:35:44 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@123 -- # return 0 00:31:25.598 17:35:44 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@60 -- # trap 'iscsicleanup; killprocess $pid; rbd_cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:25.598 17:35:44 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t randrw -r 1 -v 00:31:25.598 [global] 00:31:25.598 thread=1 00:31:25.598 invalidate=1 00:31:25.598 rw=randrw 00:31:25.598 time_based=1 00:31:25.598 runtime=1 00:31:25.598 ioengine=libaio 00:31:25.598 direct=1 00:31:25.598 bs=4096 00:31:25.598 iodepth=1 00:31:25.598 norandommap=0 00:31:25.598 numjobs=1 00:31:25.598 00:31:25.598 verify_dump=1 00:31:25.598 verify_backlog=512 00:31:25.598 verify_state_save=0 00:31:25.598 do_verify=1 00:31:25.598 verify=crc32c-intel 00:31:25.598 [job0] 00:31:25.598 filename=/dev/sda 00:31:25.598 queue_depth set to 113 (sda) 00:31:25.857 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:25.857 fio-3.35 00:31:25.857 Starting 1 thread 00:31:25.857 
[2024-07-22 17:35:44.608215] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:26.815 [2024-07-22 17:35:45.724710] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:27.074 00:31:27.074 job0: (groupid=0, jobs=1): err= 0: pid=123784: Mon Jul 22 17:35:45 2024 00:31:27.074 read: IOPS=66, BW=266KiB/s (272kB/s)(268KiB/1009msec) 00:31:27.074 slat (nsec): min=11386, max=99519, avg=34943.76, stdev=14100.26 00:31:27.074 clat (usec): min=163, max=6424, avg=485.81, stdev=786.44 00:31:27.074 lat (usec): min=181, max=6459, avg=520.75, stdev=788.33 00:31:27.074 clat percentiles (usec): 00:31:27.074 | 1.00th=[ 163], 5.00th=[ 208], 10.00th=[ 225], 20.00th=[ 258], 00:31:27.074 | 30.00th=[ 281], 40.00th=[ 297], 50.00th=[ 318], 60.00th=[ 343], 00:31:27.074 | 70.00th=[ 404], 80.00th=[ 449], 90.00th=[ 537], 95.00th=[ 1205], 00:31:27.074 | 99.00th=[ 6456], 99.50th=[ 6456], 99.90th=[ 6456], 99.95th=[ 6456], 00:31:27.074 | 99.99th=[ 6456] 00:31:27.074 bw ( KiB/s): min= 232, max= 304, per=100.00%, avg=268.00, stdev=50.91, samples=2 00:31:27.074 iops : min= 58, max= 76, avg=67.00, stdev=12.73, samples=2 00:31:27.074 write: IOPS=69, BW=278KiB/s (284kB/s)(280KiB/1009msec); 0 zone resets 00:31:27.074 slat (nsec): min=15814, max=93895, avg=37783.29, stdev=12573.29 00:31:27.074 clat (usec): min=4156, max=23936, avg=13850.47, stdev=3689.56 00:31:27.074 lat (usec): min=4197, max=23997, avg=13888.25, stdev=3690.81 00:31:27.074 clat percentiles (usec): 00:31:27.074 | 1.00th=[ 4146], 5.00th=[ 6587], 10.00th=[ 8848], 20.00th=[10814], 00:31:27.074 | 30.00th=[12649], 40.00th=[13698], 50.00th=[14091], 60.00th=[15008], 00:31:27.074 | 70.00th=[15664], 80.00th=[16319], 90.00th=[17695], 95.00th=[18744], 00:31:27.074 | 99.00th=[23987], 99.50th=[23987], 99.90th=[23987], 99.95th=[23987], 00:31:27.074 | 99.99th=[23987] 00:31:27.074 bw ( KiB/s): min= 256, max= 296, per=99.46%, avg=276.00, stdev=28.28, samples=2 00:31:27.074 iops : min= 64, 
max= 74, avg=69.00, stdev= 7.07, samples=2 00:31:27.074 lat (usec) : 250=8.03%, 500=35.77%, 750=1.46% 00:31:27.074 lat (msec) : 2=2.92%, 10=8.03%, 20=42.34%, 50=1.46% 00:31:27.074 cpu : usr=0.10%, sys=0.60%, ctx=137, majf=0, minf=1 00:31:27.074 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.074 issued rwts: total=67,70,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.074 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:27.074 00:31:27.074 Run status group 0 (all jobs): 00:31:27.074 READ: bw=266KiB/s (272kB/s), 266KiB/s-266KiB/s (272kB/s-272kB/s), io=268KiB (274kB), run=1009-1009msec 00:31:27.074 WRITE: bw=278KiB/s (284kB/s), 278KiB/s-278KiB/s (284kB/s-284kB/s), io=280KiB (287kB), run=1009-1009msec 00:31:27.074 00:31:27.074 Disk stats (read/write): 00:31:27.074 sda: ios=105/61, merge=0/0, ticks=39/853, in_queue=893, util=91.11% 00:31:27.074 17:35:45 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 -v 00:31:27.074 [global] 00:31:27.074 thread=1 00:31:27.074 invalidate=1 00:31:27.074 rw=randrw 00:31:27.074 time_based=1 00:31:27.074 runtime=1 00:31:27.074 ioengine=libaio 00:31:27.074 direct=1 00:31:27.074 bs=131072 00:31:27.074 iodepth=32 00:31:27.074 norandommap=0 00:31:27.074 numjobs=1 00:31:27.074 00:31:27.074 verify_dump=1 00:31:27.074 verify_backlog=512 00:31:27.074 verify_state_save=0 00:31:27.074 do_verify=1 00:31:27.074 verify=crc32c-intel 00:31:27.074 [job0] 00:31:27.074 filename=/dev/sda 00:31:27.074 queue_depth set to 113 (sda) 00:31:27.074 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:31:27.074 fio-3.35 00:31:27.074 Starting 1 thread 00:31:27.074 [2024-07-22 17:35:45.921271] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:29.015 [2024-07-22 17:35:47.556458] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:29.015 00:31:29.015 job0: (groupid=0, jobs=1): err= 0: pid=123836: Mon Jul 22 17:35:47 2024 00:31:29.015 read: IOPS=93, BW=11.7MiB/s (12.2MB/s)(17.8MiB/1520msec) 00:31:29.015 slat (usec): min=6, max=480, avg=28.97, stdev=40.05 00:31:29.015 clat (usec): min=5, max=66550, avg=2023.38, stdev=5679.55 00:31:29.015 lat (usec): min=349, max=66565, avg=2052.35, stdev=5676.88 00:31:29.015 clat percentiles (usec): 00:31:29.015 | 1.00th=[ 330], 5.00th=[ 379], 10.00th=[ 412], 20.00th=[ 461], 00:31:29.015 | 30.00th=[ 570], 40.00th=[ 906], 50.00th=[ 1090], 60.00th=[ 1401], 00:31:29.015 | 70.00th=[ 1663], 80.00th=[ 1876], 90.00th=[ 3490], 95.00th=[ 6063], 00:31:29.015 | 99.00th=[ 7570], 99.50th=[66323], 99.90th=[66323], 99.95th=[66323], 00:31:29.015 | 99.99th=[66323] 00:31:29.016 bw ( KiB/s): min= 9728, max=26624, per=100.00%, avg=18176.00, stdev=11947.28, samples=2 00:31:29.016 iops : min= 76, max= 208, avg=142.00, stdev=93.34, samples=2 00:31:29.016 write: IOPS=90, BW=11.3MiB/s (11.8MB/s)(17.1MiB/1520msec); 0 zone resets 00:31:29.016 slat (usec): min=33, max=241, avg=81.34, stdev=26.10 00:31:29.016 clat (msec): min=16, max=1091, avg=348.88, stdev=302.98 00:31:29.016 lat (msec): min=16, max=1092, avg=348.97, stdev=302.99 00:31:29.016 clat percentiles (msec): 00:31:29.016 | 1.00th=[ 21], 5.00th=[ 38], 10.00th=[ 91], 20.00th=[ 118], 00:31:29.016 | 30.00th=[ 124], 40.00th=[ 138], 50.00th=[ 150], 60.00th=[ 338], 00:31:29.016 | 70.00th=[ 527], 80.00th=[ 617], 90.00th=[ 852], 95.00th=[ 978], 00:31:29.016 | 99.00th=[ 1083], 99.50th=[ 1099], 99.90th=[ 1099], 99.95th=[ 1099], 00:31:29.016 | 99.99th=[ 1099] 00:31:29.016 bw ( KiB/s): min= 256, max=19968, per=78.40%, avg=9045.33, stdev=10027.67, samples=3 00:31:29.016 iops : min= 2, max= 156, avg=70.67, stdev=78.34, samples=3 00:31:29.016 lat (usec) 
: 10=0.36%, 500=12.19%, 750=5.73%, 1000=3.94% 00:31:29.016 lat (msec) : 2=19.71%, 4=4.30%, 10=4.30%, 20=0.36%, 50=2.51% 00:31:29.016 lat (msec) : 100=3.23%, 250=21.51%, 500=6.45%, 750=8.60%, 1000=5.02% 00:31:29.016 lat (msec) : 2000=1.79% 00:31:29.016 cpu : usr=0.72%, sys=0.39%, ctx=252, majf=0, minf=1 00:31:29.016 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.9%, 16=5.7%, 32=88.9%, >=64=0.0% 00:31:29.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.016 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.4%, 64=0.0%, >=64=0.0% 00:31:29.016 issued rwts: total=142,137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.016 latency : target=0, window=0, percentile=100.00%, depth=32 00:31:29.016 00:31:29.016 Run status group 0 (all jobs): 00:31:29.016 READ: bw=11.7MiB/s (12.2MB/s), 11.7MiB/s-11.7MiB/s (12.2MB/s-12.2MB/s), io=17.8MiB (18.6MB), run=1520-1520msec 00:31:29.016 WRITE: bw=11.3MiB/s (11.8MB/s), 11.3MiB/s-11.3MiB/s (11.8MB/s-11.8MB/s), io=17.1MiB (18.0MB), run=1520-1520msec 00:31:29.016 00:31:29.016 Disk stats (read/write): 00:31:29.016 sda: ios=190/126, merge=0/0, ticks=278/34563, in_queue=34840, util=93.68% 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@65 -- # rm -f ./local-job0-0-verify.state 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@67 -- # trap - SIGINT SIGTERM EXIT 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@69 -- # iscsicleanup 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:31:29.016 Cleaning up iSCSI connection 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:31:29.016 Logging out of session [sid: 73, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:31:29.016 Logout of [sid: 73, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@983 -- # rm -rf 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@70 -- # rpc_cmd bdev_rbd_delete Ceph0 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:29.016 [2024-07-22 17:35:47.669271] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Ceph0) received event(SPDK_BDEV_EVENT_REMOVE) 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@71 -- # rpc_cmd bdev_rbd_unregister_cluster iscsi_rbd_cluster 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@72 -- # killprocess 123664 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@948 -- # '[' -z 123664 ']' 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@952 -- # kill -0 123664 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@953 -- # uname 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123664 00:31:29.016 killing process with pid 123664 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123664' 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@967 -- # kill 123664 00:31:29.016 17:35:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@972 -- # wait 123664 00:31:31.546 17:35:50 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@73 -- # rbd_cleanup 00:31:31.546 17:35:50 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1031 -- # hash ceph 00:31:31.546 17:35:50 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:31:31.546 + base_dir=/var/tmp/ceph 00:31:31.546 + image=/var/tmp/ceph/ceph_raw.img 00:31:31.546 + dev=/dev/loop200 00:31:31.546 + pkill -9 ceph 00:31:31.546 + sleep 3 00:31:34.850 + umount /dev/loop200p2 00:31:34.850 umount: /dev/loop200p2: not mounted. 00:31:34.850 + losetup -d /dev/loop200 00:31:34.850 + rm -rf /var/tmp/ceph 00:31:34.850 17:35:53 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:31:34.850 17:35:53 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@75 -- # iscsitestfini 00:31:34.850 17:35:53 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:31:34.850 00:31:34.850 real 0m32.030s 00:31:34.850 user 0m33.114s 00:31:34.850 sys 0m2.060s 00:31:34.850 17:35:53 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:34.850 ************************************ 00:31:34.850 END TEST iscsi_tgt_rbd 00:31:34.850 ************************************ 00:31:34.850 17:35:53 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:34.850 17:35:53 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:31:34.850 17:35:53 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@57 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:31:34.850 17:35:53 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@59 -- # '[' 1 -eq 1 ']' 00:31:34.850 17:35:53 iscsi_tgt -- 
iscsi_tgt/iscsi_tgt.sh@60 -- # run_test iscsi_tgt_initiator /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/initiator/initiator.sh 00:31:34.850 17:35:53 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:34.850 17:35:53 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:34.850 17:35:53 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:31:34.850 ************************************ 00:31:34.850 START TEST iscsi_tgt_initiator 00:31:34.850 ************************************ 00:31:34.850 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/initiator/initiator.sh 00:31:34.850 * Looking for test storage... 00:31:34.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/initiator 00:31:34.850 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:31:34.850 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:31:34.850 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:31:34.850 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:31:34.850 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:31:34.850 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:31:34.850 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:31:34.850 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:31:34.850 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:31:34.850 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:31:34.851 17:35:53 
iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@11 -- # iscsitestinit 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@16 -- # timing_enter start_iscsi_tgt 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@19 -- # pid=123981 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@20 -- # echo 'iSCSI target launched. 
pid: 123981' 00:31:34.851 iSCSI target launched. pid: 123981 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@21 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@22 -- # waitforlisten 123981 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@18 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@829 -- # '[' -z 123981 ']' 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:34.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:34.851 17:35:53 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:31:34.851 [2024-07-22 17:35:53.402244] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:31:34.851 [2024-07-22 17:35:53.402453] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123981 ] 00:31:34.851 [2024-07-22 17:35:53.726324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.108 [2024-07-22 17:35:53.968930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.365 17:35:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:35.365 17:35:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@862 -- # return 0 00:31:35.365 17:35:54 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@23 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:31:35.365 17:35:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.365 17:35:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:31:35.365 17:35:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.365 17:35:54 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@24 -- # rpc_cmd framework_start_init 00:31:35.365 17:35:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.365 17:35:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:31:36.297 17:35:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.297 iscsi_tgt is listening. Running tests... 00:31:36.297 17:35:54 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@25 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:31:36.297 17:35:54 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@27 -- # timing_exit start_iscsi_tgt 00:31:36.297 17:35:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:36.297 17:35:54 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:31:36.297 17:35:55 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@29 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:31:36.297 17:35:55 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.297 17:35:55 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:31:36.297 17:35:55 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.297 17:35:55 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@30 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:31:36.297 17:35:55 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.297 17:35:55 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:31:36.297 17:35:55 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.297 17:35:55 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@31 -- # rpc_cmd bdev_malloc_create 64 512 00:31:36.297 17:35:55 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.297 17:35:55 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:31:36.297 Malloc0 00:31:36.297 17:35:55 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.297 17:35:55 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@36 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:31:36.297 17:35:55 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.297 17:35:55 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # 
set +x 00:31:36.297 17:35:55 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.297 17:35:55 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@37 -- # sleep 1 00:31:37.231 17:35:56 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@38 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:31:37.231 17:35:56 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 5 -s 512 00:31:37.231 17:35:56 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@40 -- # initiator_json_config 00:31:37.231 17:35:56 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:31:37.488 [2024-07-22 17:35:56.272659] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:37.488 [2024-07-22 17:35:56.272878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124045 ] 00:31:37.745 [2024-07-22 17:35:56.588840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.003 [2024-07-22 17:35:56.911932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.260 Running I/O for 5 seconds... 
00:31:43.523 00:31:43.523 Latency(us) 00:31:43.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:43.523 Job: iSCSI0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:43.523 Verification LBA range: start 0x0 length 0x4000 00:31:43.523 iSCSI0 : 5.01 12003.05 46.89 0.00 0.00 10624.78 1779.90 12809.31 00:31:43.523 =================================================================================================================== 00:31:43.523 Total : 12003.05 46.89 0.00 0.00 10624.78 1779.90 12809.31 00:31:44.900 17:36:03 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w unmap -t 5 -s 512 00:31:44.900 17:36:03 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@41 -- # initiator_json_config 00:31:44.900 17:36:03 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:31:44.900 [2024-07-22 17:36:03.801324] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:44.900 [2024-07-22 17:36:03.801541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124137 ] 00:31:45.468 [2024-07-22 17:36:04.115953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.468 [2024-07-22 17:36:04.350621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.726 Running I/O for 5 seconds... 
00:31:50.994 00:31:50.994 Latency(us) 00:31:50.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:50.994 Job: iSCSI0 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096) 00:31:50.994 iSCSI0 : 5.00 23729.14 92.69 0.00 0.00 5388.15 1131.99 10247.45 00:31:50.994 =================================================================================================================== 00:31:50.994 Total : 23729.14 92.69 0.00 0.00 5388.15 1131.99 10247.45 00:31:52.407 17:36:11 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w flush -t 5 -s 512 00:31:52.407 17:36:11 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@42 -- # initiator_json_config 00:31:52.407 17:36:11 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:31:52.407 [2024-07-22 17:36:11.199808] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:52.407 [2024-07-22 17:36:11.200061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124212 ] 00:31:52.665 [2024-07-22 17:36:11.513624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.923 [2024-07-22 17:36:11.777373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.182 Running I/O for 5 seconds... 
00:31:58.449 00:31:58.449 Latency(us) 00:31:58.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:58.449 Job: iSCSI0 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096) 00:31:58.449 iSCSI0 : 5.00 42591.97 166.37 0.00 0.00 3001.46 953.25 4706.68 00:31:58.449 =================================================================================================================== 00:31:58.449 Total : 42591.97 166.37 0.00 0.00 3001.46 953.25 4706.68 00:31:59.826 17:36:18 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w reset -t 10 -s 512 00:31:59.826 17:36:18 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@43 -- # initiator_json_config 00:31:59.826 17:36:18 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:31:59.826 [2024-07-22 17:36:18.651579] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:59.826 [2024-07-22 17:36:18.651807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124290 ] 00:32:00.097 [2024-07-22 17:36:18.963426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.375 [2024-07-22 17:36:19.224579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.633 Running I/O for 10 seconds... 
00:32:10.605 00:32:10.605 Latency(us) 00:32:10.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.605 Job: iSCSI0 (Core Mask 0x1, workload: reset, depth: 128, IO size: 4096) 00:32:10.605 Verification LBA range: start 0x0 length 0x4000 00:32:10.605 iSCSI0 : 10.01 11859.18 46.32 0.00 0.00 10756.43 1951.19 13762.56 00:32:10.605 =================================================================================================================== 00:32:10.605 Total : 11859.18 46.32 0.00 0.00 10756.43 1951.19 13762.56 00:32:12.505 17:36:31 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:32:12.505 17:36:31 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@47 -- # killprocess 123981 00:32:12.505 17:36:31 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@948 -- # '[' -z 123981 ']' 00:32:12.505 17:36:31 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@952 -- # kill -0 123981 00:32:12.505 17:36:31 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@953 -- # uname 00:32:12.505 17:36:31 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:12.505 17:36:31 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123981 00:32:12.505 17:36:31 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:12.505 killing process with pid 123981 00:32:12.505 17:36:31 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:12.505 17:36:31 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123981' 00:32:12.505 17:36:31 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@967 -- # kill 123981 00:32:12.505 17:36:31 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@972 -- # wait 123981 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@49 -- 
# iscsitestfini 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:32:15.041 00:32:15.041 real 0m40.469s 00:32:15.041 user 1m0.441s 00:32:15.041 sys 0m11.285s 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:32:15.041 ************************************ 00:32:15.041 END TEST iscsi_tgt_initiator 00:32:15.041 ************************************ 00:32:15.041 17:36:33 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:32:15.041 17:36:33 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@61 -- # run_test iscsi_tgt_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/bdev_io_wait/bdev_io_wait.sh 00:32:15.041 17:36:33 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:15.041 17:36:33 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:15.041 17:36:33 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:32:15.041 ************************************ 00:32:15.041 START TEST iscsi_tgt_bdev_io_wait 00:32:15.041 ************************************ 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/bdev_io_wait/bdev_io_wait.sh 00:32:15.041 * Looking for test storage... 
00:32:15.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/bdev_io_wait 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@26 
-- # INITIATOR_NAME=ANY 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@11 -- # iscsitestinit 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@16 -- # timing_enter start_iscsi_tgt 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@19 -- # pid=124487 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@20 -- # echo 'iSCSI target launched. pid: 124487' 00:32:15.041 iSCSI target launched. 
pid: 124487 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@21 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@18 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@22 -- # waitforlisten 124487 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 124487 ']' 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:15.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:15.041 17:36:33 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:15.041 [2024-07-22 17:36:33.937787] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:32:15.042 [2024-07-22 17:36:33.938021] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124487 ] 00:32:15.613 [2024-07-22 17:36:34.273082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.613 [2024-07-22 17:36:34.559526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.177 17:36:34 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:16.177 17:36:34 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:32:16.177 17:36:34 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@23 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:32:16.177 17:36:34 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.177 17:36:34 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:16.177 17:36:34 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.177 17:36:34 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@25 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:16.177 17:36:34 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.177 17:36:34 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:16.177 17:36:34 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.177 17:36:34 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@26 -- # rpc_cmd framework_start_init 00:32:16.177 17:36:34 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.177 17:36:34 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:16.740 17:36:35 
iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.740 iscsi_tgt is listening. Running tests... 00:32:16.740 17:36:35 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@27 -- # echo 'iscsi_tgt is listening. Running tests...' 00:32:16.740 17:36:35 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@29 -- # timing_exit start_iscsi_tgt 00:32:16.740 17:36:35 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:16.740 17:36:35 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:16.740 17:36:35 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@31 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:32:16.740 17:36:35 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.740 17:36:35 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:16.740 17:36:35 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.740 17:36:35 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@32 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:32:16.740 17:36:35 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.740 17:36:35 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:16.740 17:36:35 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.740 17:36:35 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@33 -- # rpc_cmd bdev_malloc_create 64 512 00:32:16.740 17:36:35 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.740 17:36:35 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:16.996 Malloc0 00:32:16.996 17:36:35 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:16.996 17:36:35 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@38 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:32:16.996 17:36:35 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.996 17:36:35 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:16.996 17:36:35 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.996 17:36:35 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@39 -- # sleep 1 00:32:17.929 17:36:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@40 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:32:17.929 17:36:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w write -t 1 00:32:17.929 17:36:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@42 -- # initiator_json_config 00:32:17.929 17:36:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:32:17.929 [2024-07-22 17:36:36.851065] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:17.929 [2024-07-22 17:36:36.851257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124531 ] 00:32:18.187 [2024-07-22 17:36:37.017051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.446 [2024-07-22 17:36:37.307096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.013 Running I/O for 1 seconds... 
00:32:19.951 00:32:19.951 Latency(us) 00:32:19.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.951 Job: iSCSI0 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:32:19.951 iSCSI0 : 1.01 20282.49 79.23 0.00 0.00 6290.55 1757.56 7596.22 00:32:19.951 =================================================================================================================== 00:32:19.951 Total : 20282.49 79.23 0.00 0.00 6290.55 1757.56 7596.22 00:32:21.326 17:36:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@43 -- # initiator_json_config 00:32:21.326 17:36:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w read -t 1 00:32:21.326 17:36:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:32:21.326 [2024-07-22 17:36:40.065877] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:21.326 [2024-07-22 17:36:40.066117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124571 ] 00:32:21.326 [2024-07-22 17:36:40.241186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.586 [2024-07-22 17:36:40.491575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.154 Running I/O for 1 seconds... 
00:32:23.089 00:32:23.089 Latency(us) 00:32:23.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.089 Job: iSCSI0 (Core Mask 0x1, workload: read, depth: 128, IO size: 4096) 00:32:23.089 iSCSI0 : 1.00 25617.67 100.07 0.00 0.00 4981.40 1035.17 5749.29 00:32:23.089 =================================================================================================================== 00:32:23.089 Total : 25617.67 100.07 0.00 0.00 4981.40 1035.17 5749.29 00:32:24.469 17:36:43 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w flush -t 1 00:32:24.469 17:36:43 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@44 -- # initiator_json_config 00:32:24.469 17:36:43 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:32:24.469 [2024-07-22 17:36:43.247864] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:24.469 [2024-07-22 17:36:43.248114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124604 ] 00:32:24.730 [2024-07-22 17:36:43.423673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.988 [2024-07-22 17:36:43.719211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.246 Running I/O for 1 seconds... 
00:32:26.179 00:32:26.179 Latency(us) 00:32:26.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:26.179 Job: iSCSI0 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096) 00:32:26.179 iSCSI0 : 1.00 30042.07 117.35 0.00 0.00 4250.17 1131.99 5749.29 00:32:26.179 =================================================================================================================== 00:32:26.179 Total : 30042.07 117.35 0.00 0.00 4250.17 1131.99 5749.29 00:32:27.555 17:36:46 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w unmap -t 1 00:32:27.555 17:36:46 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@45 -- # initiator_json_config 00:32:27.555 17:36:46 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:32:27.555 [2024-07-22 17:36:46.467166] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:27.555 [2024-07-22 17:36:46.467378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124638 ] 00:32:27.814 [2024-07-22 17:36:46.640379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.072 [2024-07-22 17:36:46.901782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.331 Running I/O for 1 seconds... 
00:32:29.315 00:32:29.315 Latency(us) 00:32:29.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.315 Job: iSCSI0 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096) 00:32:29.315 iSCSI0 : 1.01 18126.34 70.81 0.00 0.00 7039.49 1213.91 8638.84 00:32:29.315 =================================================================================================================== 00:32:29.315 Total : 18126.34 70.81 0.00 0.00 7039.49 1213.91 8638.84 00:32:30.691 17:36:49 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@47 -- # trap - SIGINT SIGTERM EXIT 00:32:30.691 17:36:49 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@49 -- # killprocess 124487 00:32:30.691 17:36:49 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 124487 ']' 00:32:30.691 17:36:49 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 124487 00:32:30.691 17:36:49 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:32:30.691 17:36:49 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:30.691 17:36:49 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124487 00:32:30.691 killing process with pid 124487 00:32:30.691 17:36:49 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:30.691 17:36:49 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:30.691 17:36:49 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124487' 00:32:30.691 17:36:49 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 124487 00:32:30.691 17:36:49 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 124487 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@51 -- # 
iscsitestfini 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:32:33.236 00:32:33.236 real 0m18.103s 00:32:33.236 user 0m26.918s 00:32:33.236 sys 0m3.502s 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:33.236 ************************************ 00:32:33.236 END TEST iscsi_tgt_bdev_io_wait 00:32:33.236 ************************************ 00:32:33.236 17:36:51 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:32:33.236 17:36:51 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@62 -- # run_test iscsi_tgt_resize /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/resize/resize.sh 00:32:33.236 17:36:51 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:33.236 17:36:51 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:33.236 17:36:51 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:32:33.236 ************************************ 00:32:33.236 START TEST iscsi_tgt_resize 00:32:33.236 ************************************ 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/resize/resize.sh 00:32:33.236 * Looking for test storage... 
00:32:33.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/resize 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 
00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@12 -- # iscsitestinit 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@14 -- # BDEV_SIZE=64 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@15 -- # BDEV_NEW_SIZE=128 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@16 -- # BLOCK_SIZE=512 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@17 -- # RESIZE_SOCK=/var/tmp/spdk-resize.sock 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@19 -- # timing_enter start_iscsi_tgt 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@22 -- # rm -f /var/tmp/spdk-resize.sock 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@25 -- # pid=124741 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@24 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:32:33.236 iSCSI target launched. pid: 124741 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@26 -- # echo 'iSCSI target launched. 
pid: 124741' 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@27 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@28 -- # waitforlisten 124741 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@829 -- # '[' -z 124741 ']' 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:33.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:33.236 17:36:51 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:32:33.236 [2024-07-22 17:36:52.101047] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:32:33.237 [2024-07-22 17:36:52.101323] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124741 ] 00:32:33.494 [2024-07-22 17:36:52.432327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.060 [2024-07-22 17:36:52.770606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.317 17:36:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:34.317 17:36:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@862 -- # return 0 00:32:34.317 17:36:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@29 -- # rpc_cmd framework_start_init 00:32:34.317 17:36:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.317 17:36:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.250 iscsi_tgt is listening. Running tests... 00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@30 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@32 -- # timing_exit start_iscsi_tgt 00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@34 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@35 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@36 -- # rpc_cmd bdev_null_create Null0 64 512 00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:32:35.250 Null0 00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@41 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Null0:0 1:2 256 -d 00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.250 17:36:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@42 -- # sleep 1 00:32:36.183 17:36:54 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@43 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:32:36.183 17:36:54 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@47 -- # bdevperf_pid=124790 00:32:36.183 17:36:54 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@48 -- # waitforlisten 124790 /var/tmp/spdk-resize.sock 00:32:36.183 17:36:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@829 -- # '[' -z 124790 ']' 00:32:36.183 17:36:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-resize.sock 00:32:36.183 17:36:54 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-resize.sock --json /dev/fd/63 -q 16 -o 4096 -w read -t 5 -R -s 128 -z 00:32:36.183 17:36:54 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@46 -- # initiator_json_config 00:32:36.183 17:36:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:36.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-resize.sock... 00:32:36.183 17:36:54 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@139 -- # jq . 00:32:36.183 17:36:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-resize.sock...' 00:32:36.183 17:36:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:36.183 17:36:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:32:36.183 [2024-07-22 17:36:55.107777] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:32:36.183 [2024-07-22 17:36:55.107965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 128 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124790 ] 00:32:36.441 [2024-07-22 17:36:55.306535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.700 [2024-07-22 17:36:55.535499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.267 17:36:55 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:37.267 17:36:55 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@862 -- # return 0 00:32:37.267 17:36:55 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@50 -- # rpc_cmd bdev_null_resize Null0 128 00:32:37.267 17:36:55 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.267 17:36:55 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:32:37.267 [2024-07-22 17:36:55.960439] lun.c: 402:bdev_event_cb: *NOTICE*: bdev name (Null0) received event(SPDK_BDEV_EVENT_RESIZE) 00:32:37.267 true 00:32:37.267 17:36:55 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.267 17:36:55 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@52 -- # rpc_cmd -s /var/tmp/spdk-resize.sock bdev_get_bdevs 00:32:37.267 17:36:55 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.267 17:36:55 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:32:37.267 17:36:55 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@52 -- # jq '.[].num_blocks' 00:32:37.267 17:36:55 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.267 17:36:56 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@52 -- # num_block=131072 00:32:37.267 17:36:56 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@54 -- # total_size=64 
00:32:37.267 17:36:56 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@55 -- # '[' 64 '!=' 64 ']' 00:32:37.267 17:36:56 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@59 -- # sleep 2 00:32:39.210 17:36:58 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@61 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-resize.sock perform_tests 00:32:39.210 Running I/O for 5 seconds... 00:32:44.478 00:32:44.478 Latency(us) 00:32:44.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.478 Job: iSCSI0 (Core Mask 0x1, workload: read, depth: 16, IO size: 4096) 00:32:44.478 iSCSI0 : 5.00 31037.51 121.24 0.00 0.00 511.89 269.96 1079.85 00:32:44.478 =================================================================================================================== 00:32:44.478 Total : 31037.51 121.24 0.00 0.00 511.89 269.96 1079.85 00:32:44.478 0 00:32:44.478 17:37:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@63 -- # jq '.[].num_blocks' 00:32:44.478 17:37:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@63 -- # rpc_cmd -s /var/tmp/spdk-resize.sock bdev_get_bdevs 00:32:44.478 17:37:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.478 17:37:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:32:44.478 17:37:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.478 17:37:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@63 -- # num_block=262144 00:32:44.478 17:37:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@65 -- # total_size=128 00:32:44.478 17:37:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@66 -- # '[' 128 '!=' 128 ']' 00:32:44.478 17:37:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:32:44.478 17:37:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@72 -- # killprocess 124790 00:32:44.478 17:37:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@948 -- # '[' -z 
124790 ']' 00:32:44.478 17:37:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@952 -- # kill -0 124790 00:32:44.478 17:37:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@953 -- # uname 00:32:44.478 17:37:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:44.478 17:37:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124790 00:32:44.478 killing process with pid 124790 00:32:44.478 Received shutdown signal, test time was about 5.000000 seconds 00:32:44.478 00:32:44.478 Latency(us) 00:32:44.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.478 =================================================================================================================== 00:32:44.478 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:44.478 17:37:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:44.478 17:37:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:44.478 17:37:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124790' 00:32:44.478 17:37:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@967 -- # kill 124790 00:32:44.478 17:37:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@972 -- # wait 124790 00:32:45.879 17:37:04 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@73 -- # killprocess 124741 00:32:45.879 17:37:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@948 -- # '[' -z 124741 ']' 00:32:45.879 17:37:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@952 -- # kill -0 124741 00:32:45.879 17:37:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@953 -- # uname 00:32:45.879 17:37:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:45.879 17:37:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # 
ps --no-headers -o comm= 124741 00:32:45.879 killing process with pid 124741 00:32:45.879 17:37:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:45.879 17:37:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:45.879 17:37:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124741' 00:32:45.879 17:37:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@967 -- # kill 124741 00:32:45.879 17:37:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@972 -- # wait 124741 00:32:48.411 17:37:06 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@75 -- # iscsitestfini 00:32:48.411 17:37:06 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:32:48.411 00:32:48.411 real 0m15.087s 00:32:48.411 user 0m21.754s 00:32:48.411 sys 0m3.130s 00:32:48.411 ************************************ 00:32:48.411 END TEST iscsi_tgt_resize 00:32:48.411 ************************************ 00:32:48.411 17:37:06 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:48.411 17:37:06 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:32:48.411 17:37:06 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:32:48.411 17:37:06 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@65 -- # cleanup_veth_interfaces 00:32:48.411 17:37:06 iscsi_tgt -- iscsi_tgt/common.sh@95 -- # ip link set init_br nomaster 00:32:48.411 17:37:06 iscsi_tgt -- iscsi_tgt/common.sh@96 -- # ip link set tgt_br nomaster 00:32:48.411 17:37:06 iscsi_tgt -- iscsi_tgt/common.sh@97 -- # ip link set tgt_br2 nomaster 00:32:48.411 17:37:06 iscsi_tgt -- iscsi_tgt/common.sh@98 -- # ip link set init_br down 00:32:48.411 17:37:07 iscsi_tgt -- iscsi_tgt/common.sh@99 -- # ip link set tgt_br down 00:32:48.411 17:37:07 iscsi_tgt -- iscsi_tgt/common.sh@100 -- # ip link set tgt_br2 down 00:32:48.411 17:37:07 iscsi_tgt -- 
iscsi_tgt/common.sh@101 -- # ip link delete iscsi_br type bridge 00:32:48.411 17:37:07 iscsi_tgt -- iscsi_tgt/common.sh@102 -- # ip link delete spdk_init_int 00:32:48.411 17:37:07 iscsi_tgt -- iscsi_tgt/common.sh@103 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int 00:32:48.411 17:37:07 iscsi_tgt -- iscsi_tgt/common.sh@104 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int2 00:32:48.411 17:37:07 iscsi_tgt -- iscsi_tgt/common.sh@105 -- # ip netns del spdk_iscsi_ns 00:32:48.411 17:37:07 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:32:48.411 ************************************ 00:32:48.411 END TEST iscsi_tgt 00:32:48.411 ************************************ 00:32:48.411 00:32:48.411 real 24m9.334s 00:32:48.411 user 43m9.276s 00:32:48.411 sys 7m18.321s 00:32:48.412 17:37:07 iscsi_tgt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:48.412 17:37:07 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:32:48.412 17:37:07 -- common/autotest_common.sh@1142 -- # return 0 00:32:48.412 17:37:07 -- spdk/autotest.sh@264 -- # run_test spdkcli_iscsi /home/vagrant/spdk_repo/spdk/test/spdkcli/iscsi.sh 00:32:48.412 17:37:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:48.412 17:37:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:48.412 17:37:07 -- common/autotest_common.sh@10 -- # set +x 00:32:48.412 ************************************ 00:32:48.412 START TEST spdkcli_iscsi 00:32:48.412 ************************************ 00:32:48.412 17:37:07 spdkcli_iscsi -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/iscsi.sh 00:32:48.412 * Looking for test storage... 
00:32:48.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:32:48.412 17:37:07 spdkcli_iscsi -- spdkcli/iscsi.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:32:48.412 17:37:07 spdkcli_iscsi -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:32:48.412 17:37:07 spdkcli_iscsi -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:32:48.412 17:37:07 spdkcli_iscsi -- spdkcli/iscsi.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:32:48.412 17:37:07 spdkcli_iscsi -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:32:48.412 17:37:07 spdkcli_iscsi -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:32:48.412 17:37:07 spdkcli_iscsi -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:32:48.412 17:37:07 spdkcli_iscsi -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:32:48.412 17:37:07 spdkcli_iscsi -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:32:48.412 17:37:07 spdkcli_iscsi -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:32:48.412 17:37:07 spdkcli_iscsi -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:32:48.412 17:37:07 spdkcli_iscsi -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:32:48.412 17:37:07 spdkcli_iscsi -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:32:48.412 17:37:07 spdkcli_iscsi -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:32:48.412 17:37:07 spdkcli_iscsi -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:32:48.412 17:37:07 spdkcli_iscsi -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:32:48.412 17:37:07 spdkcli_iscsi -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:32:48.412 17:37:07 spdkcli_iscsi -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:32:48.412 17:37:07 spdkcli_iscsi -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 
00:32:48.412 17:37:07 spdkcli_iscsi -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:32:48.412 17:37:07 spdkcli_iscsi -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:32:48.412 17:37:07 spdkcli_iscsi -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:32:48.412 17:37:07 spdkcli_iscsi -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:32:48.412 17:37:07 spdkcli_iscsi -- spdkcli/iscsi.sh@12 -- # MATCH_FILE=spdkcli_iscsi.test 00:32:48.412 17:37:07 spdkcli_iscsi -- spdkcli/iscsi.sh@13 -- # SPDKCLI_BRANCH=/iscsi 00:32:48.412 17:37:07 spdkcli_iscsi -- spdkcli/iscsi.sh@15 -- # trap cleanup EXIT 00:32:48.412 17:37:07 spdkcli_iscsi -- spdkcli/iscsi.sh@17 -- # timing_enter run_iscsi_tgt 00:32:48.412 17:37:07 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:48.412 17:37:07 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:32:48.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:48.412 17:37:07 spdkcli_iscsi -- spdkcli/iscsi.sh@21 -- # iscsi_tgt_pid=125026 00:32:48.412 17:37:07 spdkcli_iscsi -- spdkcli/iscsi.sh@22 -- # waitforlisten 125026 00:32:48.412 17:37:07 spdkcli_iscsi -- spdkcli/iscsi.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x3 -p 0 --wait-for-rpc 00:32:48.412 17:37:07 spdkcli_iscsi -- common/autotest_common.sh@829 -- # '[' -z 125026 ']' 00:32:48.412 17:37:07 spdkcli_iscsi -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.412 17:37:07 spdkcli_iscsi -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:48.412 17:37:07 spdkcli_iscsi -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:48.412 17:37:07 spdkcli_iscsi -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:48.412 17:37:07 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:32:48.670 [2024-07-22 17:37:07.384710] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:48.670 [2024-07-22 17:37:07.384899] [ DPDK EAL parameters: iscsi --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125026 ] 00:32:48.670 [2024-07-22 17:37:07.554062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:48.929 [2024-07-22 17:37:07.862597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:48.929 [2024-07-22 17:37:07.862612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:49.495 17:37:08 spdkcli_iscsi -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:49.495 17:37:08 spdkcli_iscsi -- common/autotest_common.sh@862 -- # return 0 00:32:49.495 17:37:08 spdkcli_iscsi -- spdkcli/iscsi.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:32:50.428 17:37:09 spdkcli_iscsi -- spdkcli/iscsi.sh@25 -- # timing_exit run_iscsi_tgt 00:32:50.428 17:37:09 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:50.428 17:37:09 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:32:50.428 17:37:09 spdkcli_iscsi -- spdkcli/iscsi.sh@27 -- # timing_enter spdkcli_create_iscsi_config 00:32:50.428 17:37:09 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:50.428 17:37:09 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:32:50.428 17:37:09 spdkcli_iscsi -- spdkcli/iscsi.sh@48 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc0'\'' '\''Malloc0'\'' True 00:32:50.428 '\''/bdevs/malloc create 32 512 Malloc1'\'' 
'\''Malloc1'\'' True 00:32:50.428 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:50.428 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:50.428 '\''/iscsi/portal_groups create 1 "127.0.0.1:3261 127.0.0.1:3263@0x1"'\'' '\''host=127.0.0.1, port=3261'\'' True 00:32:50.428 '\''/iscsi/portal_groups create 2 127.0.0.1:3262'\'' '\''host=127.0.0.1, port=3262'\'' True 00:32:50.428 '\''/iscsi/initiator_groups create 2 ANY 10.0.2.15/32'\'' '\''hostname=ANY, netmask=10.0.2.15/32'\'' True 00:32:50.428 '\''/iscsi/initiator_groups create 3 ANZ 10.0.2.15/32'\'' '\''hostname=ANZ, netmask=10.0.2.15/32'\'' True 00:32:50.428 '\''/iscsi/initiator_groups add_initiator 2 ANW 10.0.2.16/32'\'' '\''hostname=ANW, netmask=10.0.2.16'\'' True 00:32:50.428 '\''/iscsi/target_nodes create Target0 Target0_alias "Malloc0:0 Malloc1:1" 1:2 64 g=1'\'' '\''Target0'\'' True 00:32:50.428 '\''/iscsi/target_nodes create Target1 Target1_alias Malloc2:0 1:2 64 g=1'\'' '\''Target1'\'' True 00:32:50.428 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_add_pg_ig_maps "1:3 2:2"'\'' '\''portal_group1 - initiator_group3'\'' True 00:32:50.428 '\''/iscsi/target_nodes add_lun iqn.2016-06.io.spdk:Target1 Malloc3 2'\'' '\''Malloc3'\'' True 00:32:50.428 '\''/iscsi/auth_groups create 1 "user:test1 secret:test1 muser:mutual_test1 msecret:mutual_test1,user:test3 secret:test3 muser:mutual_test3 msecret:mutual_test3"'\'' '\''user=test3'\'' True 00:32:50.428 '\''/iscsi/auth_groups add_secret 1 user=test2 secret=test2 muser=mutual_test2 msecret=mutual_test2'\'' '\''user=test2'\'' True 00:32:50.428 '\''/iscsi/auth_groups create 2 "user:test4 secret:test4 muser:mutual_test4 msecret:mutual_test4"'\'' '\''user=test4'\'' True 00:32:50.428 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 set_auth g=1 d=true'\'' '\''disable_chap: True'\'' True 00:32:50.428 '\''/iscsi/global_params set_auth g=1 d=true r=false'\'' '\''disable_chap: True'\'' True 00:32:50.428 
'\''/iscsi ls'\'' '\''Malloc'\'' True 00:32:50.428 ' 00:32:58.540 Executing command: ['/bdevs/malloc create 32 512 Malloc0', 'Malloc0', True] 00:32:58.540 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:58.540 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:58.540 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:58.540 Executing command: ['/iscsi/portal_groups create 1 "127.0.0.1:3261 127.0.0.1:3263@0x1"', 'host=127.0.0.1, port=3261', True] 00:32:58.540 Executing command: ['/iscsi/portal_groups create 2 127.0.0.1:3262', 'host=127.0.0.1, port=3262', True] 00:32:58.540 Executing command: ['/iscsi/initiator_groups create 2 ANY 10.0.2.15/32', 'hostname=ANY, netmask=10.0.2.15/32', True] 00:32:58.540 Executing command: ['/iscsi/initiator_groups create 3 ANZ 10.0.2.15/32', 'hostname=ANZ, netmask=10.0.2.15/32', True] 00:32:58.540 Executing command: ['/iscsi/initiator_groups add_initiator 2 ANW 10.0.2.16/32', 'hostname=ANW, netmask=10.0.2.16', True] 00:32:58.540 Executing command: ['/iscsi/target_nodes create Target0 Target0_alias "Malloc0:0 Malloc1:1" 1:2 64 g=1', 'Target0', True] 00:32:58.540 Executing command: ['/iscsi/target_nodes create Target1 Target1_alias Malloc2:0 1:2 64 g=1', 'Target1', True] 00:32:58.540 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_add_pg_ig_maps "1:3 2:2"', 'portal_group1 - initiator_group3', True] 00:32:58.540 Executing command: ['/iscsi/target_nodes add_lun iqn.2016-06.io.spdk:Target1 Malloc3 2', 'Malloc3', True] 00:32:58.540 Executing command: ['/iscsi/auth_groups create 1 "user:test1 secret:test1 muser:mutual_test1 msecret:mutual_test1,user:test3 secret:test3 muser:mutual_test3 msecret:mutual_test3"', 'user=test3', True] 00:32:58.540 Executing command: ['/iscsi/auth_groups add_secret 1 user=test2 secret=test2 muser=mutual_test2 msecret=mutual_test2', 'user=test2', True] 00:32:58.540 Executing command: 
['/iscsi/auth_groups create 2 "user:test4 secret:test4 muser:mutual_test4 msecret:mutual_test4"', 'user=test4', True] 00:32:58.540 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 set_auth g=1 d=true', 'disable_chap: True', True] 00:32:58.540 Executing command: ['/iscsi/global_params set_auth g=1 d=true r=false', 'disable_chap: True', True] 00:32:58.540 Executing command: ['/iscsi ls', 'Malloc', True] 00:32:58.540 17:37:16 spdkcli_iscsi -- spdkcli/iscsi.sh@49 -- # timing_exit spdkcli_create_iscsi_config 00:32:58.540 17:37:16 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:58.540 17:37:16 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:32:58.540 17:37:16 spdkcli_iscsi -- spdkcli/iscsi.sh@51 -- # timing_enter spdkcli_check_match 00:32:58.540 17:37:16 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:58.540 17:37:16 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:32:58.540 17:37:16 spdkcli_iscsi -- spdkcli/iscsi.sh@52 -- # check_match 00:32:58.540 17:37:16 spdkcli_iscsi -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /iscsi 00:32:58.540 17:37:17 spdkcli_iscsi -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_iscsi.test.match 00:32:58.540 17:37:17 spdkcli_iscsi -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_iscsi.test 00:32:58.540 17:37:17 spdkcli_iscsi -- spdkcli/iscsi.sh@53 -- # timing_exit spdkcli_check_match 00:32:58.540 17:37:17 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:58.540 17:37:17 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:32:58.540 17:37:17 spdkcli_iscsi -- spdkcli/iscsi.sh@55 -- # timing_enter spdkcli_clear_iscsi_config 00:32:58.540 17:37:17 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:58.540 17:37:17 spdkcli_iscsi -- 
common/autotest_common.sh@10 -- # set +x 00:32:58.540 17:37:17 spdkcli_iscsi -- spdkcli/iscsi.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/iscsi/auth_groups delete_secret 1 test2'\'' '\''user=test2'\'' 00:32:58.540 '\''/iscsi/auth_groups delete_secret_all 1'\'' '\''user=test1'\'' 00:32:58.540 '\''/iscsi/auth_groups delete 1'\'' '\''user=test1'\'' 00:32:58.540 '\''/iscsi/auth_groups delete_all'\'' '\''user=test4'\'' 00:32:58.540 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_remove_pg_ig_maps "1:3 2:2"'\'' '\''portal_group1 - initiator_group3'\'' 00:32:58.540 '\''/iscsi/target_nodes delete iqn.2016-06.io.spdk:Target1'\'' '\''Target1'\'' 00:32:58.540 '\''/iscsi/target_nodes delete_all'\'' '\''Target0'\'' 00:32:58.540 '\''/iscsi/initiator_groups delete_initiator 2 ANW 10.0.2.16/32'\'' '\''ANW'\'' 00:32:58.540 '\''/iscsi/initiator_groups delete 3'\'' '\''ANZ'\'' 00:32:58.540 '\''/iscsi/initiator_groups delete_all'\'' '\''ANY'\'' 00:32:58.540 '\''/iscsi/portal_groups delete 1'\'' '\''127.0.0.1:3261'\'' 00:32:58.540 '\''/iscsi/portal_groups delete_all'\'' '\''127.0.0.1:3262'\'' 00:32:58.540 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:58.541 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:58.541 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:58.541 '\''/bdevs/malloc delete Malloc0'\'' '\''Malloc0'\'' 00:32:58.541 ' 00:33:06.648 Executing command: ['/iscsi/auth_groups delete_secret 1 test2', 'user=test2', False] 00:33:06.648 Executing command: ['/iscsi/auth_groups delete_secret_all 1', 'user=test1', False] 00:33:06.648 Executing command: ['/iscsi/auth_groups delete 1', 'user=test1', False] 00:33:06.648 Executing command: ['/iscsi/auth_groups delete_all', 'user=test4', False] 00:33:06.648 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_remove_pg_ig_maps "1:3 2:2"', 'portal_group1 - initiator_group3', False] 00:33:06.648 Executing command: 
['/iscsi/target_nodes delete iqn.2016-06.io.spdk:Target1', 'Target1', False] 00:33:06.648 Executing command: ['/iscsi/target_nodes delete_all', 'Target0', False] 00:33:06.648 Executing command: ['/iscsi/initiator_groups delete_initiator 2 ANW 10.0.2.16/32', 'ANW', False] 00:33:06.648 Executing command: ['/iscsi/initiator_groups delete 3', 'ANZ', False] 00:33:06.648 Executing command: ['/iscsi/initiator_groups delete_all', 'ANY', False] 00:33:06.648 Executing command: ['/iscsi/portal_groups delete 1', '127.0.0.1:3261', False] 00:33:06.648 Executing command: ['/iscsi/portal_groups delete_all', '127.0.0.1:3262', False] 00:33:06.648 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:06.648 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:06.648 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:06.648 Executing command: ['/bdevs/malloc delete Malloc0', 'Malloc0', False] 00:33:06.648 17:37:24 spdkcli_iscsi -- spdkcli/iscsi.sh@73 -- # timing_exit spdkcli_clear_iscsi_config 00:33:06.648 17:37:24 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:06.648 17:37:24 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:33:06.648 17:37:24 spdkcli_iscsi -- spdkcli/iscsi.sh@75 -- # killprocess 125026 00:33:06.648 17:37:24 spdkcli_iscsi -- common/autotest_common.sh@948 -- # '[' -z 125026 ']' 00:33:06.648 17:37:24 spdkcli_iscsi -- common/autotest_common.sh@952 -- # kill -0 125026 00:33:06.648 17:37:24 spdkcli_iscsi -- common/autotest_common.sh@953 -- # uname 00:33:06.648 17:37:24 spdkcli_iscsi -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:06.648 17:37:24 spdkcli_iscsi -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125026 00:33:06.648 17:37:24 spdkcli_iscsi -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:06.648 17:37:24 spdkcli_iscsi -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:06.648 killing process 
with pid 125026 00:33:06.648 17:37:24 spdkcli_iscsi -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125026' 00:33:06.648 17:37:24 spdkcli_iscsi -- common/autotest_common.sh@967 -- # kill 125026 00:33:06.648 17:37:24 spdkcli_iscsi -- common/autotest_common.sh@972 -- # wait 125026 00:33:08.021 Process with pid 125026 is not found 00:33:08.021 17:37:26 spdkcli_iscsi -- spdkcli/iscsi.sh@1 -- # cleanup 00:33:08.021 17:37:26 spdkcli_iscsi -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:08.021 17:37:26 spdkcli_iscsi -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:33:08.022 17:37:26 spdkcli_iscsi -- spdkcli/common.sh@16 -- # '[' -n 125026 ']' 00:33:08.022 17:37:26 spdkcli_iscsi -- spdkcli/common.sh@17 -- # killprocess 125026 00:33:08.022 17:37:26 spdkcli_iscsi -- common/autotest_common.sh@948 -- # '[' -z 125026 ']' 00:33:08.022 17:37:26 spdkcli_iscsi -- common/autotest_common.sh@952 -- # kill -0 125026 00:33:08.022 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (125026) - No such process 00:33:08.022 17:37:26 spdkcli_iscsi -- common/autotest_common.sh@975 -- # echo 'Process with pid 125026 is not found' 00:33:08.022 17:37:26 spdkcli_iscsi -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:08.022 17:37:26 spdkcli_iscsi -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_iscsi.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:08.022 ************************************ 00:33:08.022 END TEST spdkcli_iscsi 00:33:08.022 ************************************ 00:33:08.022 00:33:08.022 real 0m19.399s 00:33:08.022 user 0m40.128s 00:33:08.022 sys 0m1.267s 00:33:08.022 17:37:26 spdkcli_iscsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:08.022 17:37:26 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:33:08.022 17:37:26 -- common/autotest_common.sh@1142 -- # return 0 00:33:08.022 17:37:26 -- spdk/autotest.sh@267 -- # 
run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:33:08.022 17:37:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:08.022 17:37:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:08.022 17:37:26 -- common/autotest_common.sh@10 -- # set +x 00:33:08.022 ************************************ 00:33:08.022 START TEST spdkcli_raid 00:33:08.022 ************************************ 00:33:08.022 17:37:26 spdkcli_raid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:33:08.022 * Looking for test storage... 00:33:08.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:33:08.022 17:37:26 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:33:08.022 17:37:26 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:33:08.022 17:37:26 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:33:08.022 17:37:26 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:33:08.022 17:37:26 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:33:08.022 17:37:26 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:33:08.022 17:37:26 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:33:08.022 17:37:26 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:33:08.022 17:37:26 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:33:08.022 17:37:26 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:33:08.022 17:37:26 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:33:08.022 17:37:26 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:33:08.022 17:37:26 
spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:33:08.022 17:37:26 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:33:08.022 17:37:26 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:33:08.022 17:37:26 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:33:08.022 17:37:26 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:33:08.022 17:37:26 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:33:08.022 17:37:26 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:33:08.022 17:37:26 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:33:08.022 17:37:26 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:33:08.022 17:37:26 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:33:08.022 17:37:26 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:33:08.022 17:37:26 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:33:08.022 17:37:26 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:33:08.022 17:37:26 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:33:08.022 17:37:26 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:33:08.022 17:37:26 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:33:08.022 17:37:26 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:33:08.022 17:37:26 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:33:08.022 17:37:26 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:33:08.022 17:37:26 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:33:08.022 17:37:26 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:33:08.022 17:37:26 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:08.022 17:37:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:08.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:08.022 17:37:26 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:33:08.022 17:37:26 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=125348 00:33:08.022 17:37:26 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 125348 00:33:08.022 17:37:26 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:33:08.022 17:37:26 spdkcli_raid -- common/autotest_common.sh@829 -- # '[' -z 125348 ']' 00:33:08.022 17:37:26 spdkcli_raid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:08.022 17:37:26 spdkcli_raid -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:08.022 17:37:26 spdkcli_raid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:08.022 17:37:26 spdkcli_raid -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:08.022 17:37:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:08.022 [2024-07-22 17:37:26.847964] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:33:08.022 [2024-07-22 17:37:26.848153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125348 ] 00:33:08.281 [2024-07-22 17:37:27.016202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:08.540 [2024-07-22 17:37:27.305831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.540 [2024-07-22 17:37:27.305842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:09.473 17:37:28 spdkcli_raid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:09.473 17:37:28 spdkcli_raid -- common/autotest_common.sh@862 -- # return 0 00:33:09.473 17:37:28 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:33:09.473 17:37:28 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:09.473 17:37:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:09.473 17:37:28 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:33:09.473 17:37:28 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:09.473 17:37:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:09.473 17:37:28 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:09.473 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:09.473 ' 00:33:10.848 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:33:10.848 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:33:10.848 17:37:29 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:33:10.848 17:37:29 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:10.848 17:37:29 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:33:10.848 17:37:29 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:33:10.848 17:37:29 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:10.848 17:37:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:11.116 17:37:29 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:33:11.116 ' 00:33:12.050 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:33:12.050 17:37:30 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:33:12.050 17:37:30 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:12.050 17:37:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:12.050 17:37:30 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:33:12.050 17:37:30 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:12.050 17:37:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:12.050 17:37:30 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:33:12.050 17:37:30 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:33:12.615 17:37:31 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:33:12.615 17:37:31 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:33:12.615 17:37:31 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:33:12.615 17:37:31 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:12.615 17:37:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:12.615 17:37:31 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:33:12.615 17:37:31 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:12.615 17:37:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:12.615 17:37:31 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:33:12.615 ' 00:33:13.988 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:33:13.988 17:37:32 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:33:13.988 17:37:32 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:13.988 17:37:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:13.988 17:37:32 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:33:13.988 17:37:32 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:13.988 17:37:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:13.988 17:37:32 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:33:13.988 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:33:13.988 ' 00:33:15.370 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:33:15.370 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:33:15.370 17:37:34 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:33:15.370 17:37:34 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:15.370 17:37:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:15.370 17:37:34 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 125348 00:33:15.370 17:37:34 spdkcli_raid -- common/autotest_common.sh@948 -- # '[' -z 125348 ']' 00:33:15.370 17:37:34 spdkcli_raid -- common/autotest_common.sh@952 -- # kill -0 125348 00:33:15.370 17:37:34 spdkcli_raid -- 
common/autotest_common.sh@953 -- # uname 00:33:15.370 17:37:34 spdkcli_raid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:15.370 17:37:34 spdkcli_raid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125348 00:33:15.370 17:37:34 spdkcli_raid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:15.370 17:37:34 spdkcli_raid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:15.370 17:37:34 spdkcli_raid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125348' 00:33:15.370 killing process with pid 125348 00:33:15.370 17:37:34 spdkcli_raid -- common/autotest_common.sh@967 -- # kill 125348 00:33:15.370 17:37:34 spdkcli_raid -- common/autotest_common.sh@972 -- # wait 125348 00:33:17.899 17:37:36 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:33:17.899 17:37:36 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 125348 ']' 00:33:17.899 17:37:36 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 125348 00:33:17.899 17:37:36 spdkcli_raid -- common/autotest_common.sh@948 -- # '[' -z 125348 ']' 00:33:17.899 Process with pid 125348 is not found 00:33:17.899 17:37:36 spdkcli_raid -- common/autotest_common.sh@952 -- # kill -0 125348 00:33:17.899 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (125348) - No such process 00:33:17.899 17:37:36 spdkcli_raid -- common/autotest_common.sh@975 -- # echo 'Process with pid 125348 is not found' 00:33:17.899 17:37:36 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:33:17.899 17:37:36 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:17.899 17:37:36 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:17.899 17:37:36 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:17.899 ************************************ 00:33:17.899 END TEST 
spdkcli_raid 00:33:17.899 ************************************ 00:33:17.899 00:33:17.899 real 0m9.826s 00:33:17.899 user 0m19.873s 00:33:17.899 sys 0m1.026s 00:33:17.899 17:37:36 spdkcli_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:17.899 17:37:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:17.899 17:37:36 -- common/autotest_common.sh@1142 -- # return 0 00:33:17.899 17:37:36 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:33:17.899 17:37:36 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:33:17.899 17:37:36 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:33:17.899 17:37:36 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:33:17.899 17:37:36 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:33:17.899 17:37:36 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:33:17.899 17:37:36 -- spdk/autotest.sh@330 -- # '[' 1 -eq 1 ']' 00:33:17.899 17:37:36 -- spdk/autotest.sh@331 -- # run_test blockdev_rbd /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh rbd 00:33:17.899 17:37:36 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:17.899 17:37:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:17.899 17:37:36 -- common/autotest_common.sh@10 -- # set +x 00:33:17.899 ************************************ 00:33:17.899 START TEST blockdev_rbd 00:33:17.899 ************************************ 00:33:17.899 17:37:36 blockdev_rbd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh rbd 00:33:17.899 * Looking for test storage... 
00:33:17.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:33:17.899 17:37:36 blockdev_rbd -- bdev/nbd_common.sh@6 -- # set -e 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@20 -- # : 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@673 -- # uname -s 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@681 -- # test_type=rbd 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@682 -- # crypto_device= 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@683 -- # dek= 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@684 -- # env_ctx= 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@689 -- # [[ rbd == 
bdev ]] 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@689 -- # [[ rbd == crypto_* ]] 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=125605 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@49 -- # waitforlisten 125605 00:33:17.899 17:37:36 blockdev_rbd -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:33:17.899 17:37:36 blockdev_rbd -- common/autotest_common.sh@829 -- # '[' -z 125605 ']' 00:33:17.899 17:37:36 blockdev_rbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:17.899 17:37:36 blockdev_rbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:17.899 17:37:36 blockdev_rbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:17.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:17.899 17:37:36 blockdev_rbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:17.899 17:37:36 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:17.899 [2024-07-22 17:37:36.709457] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:33:17.899 [2024-07-22 17:37:36.709867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125605 ] 00:33:18.157 [2024-07-22 17:37:36.873635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.416 [2024-07-22 17:37:37.123487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.982 17:37:37 blockdev_rbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:18.982 17:37:37 blockdev_rbd -- common/autotest_common.sh@862 -- # return 0 00:33:18.982 17:37:37 blockdev_rbd -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:33:18.982 17:37:37 blockdev_rbd -- bdev/blockdev.sh@719 -- # setup_rbd_conf 00:33:18.982 17:37:37 blockdev_rbd -- bdev/blockdev.sh@260 -- # timing_enter rbd_setup 00:33:18.982 17:37:37 blockdev_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:18.982 17:37:37 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:19.241 17:37:37 blockdev_rbd -- bdev/blockdev.sh@261 -- # rbd_setup 127.0.0.1 00:33:19.241 17:37:37 blockdev_rbd -- common/autotest_common.sh@1005 -- # '[' -z 127.0.0.1 ']' 00:33:19.241 17:37:37 blockdev_rbd -- common/autotest_common.sh@1009 -- # '[' -n '' ']' 00:33:19.241 17:37:37 blockdev_rbd -- common/autotest_common.sh@1018 -- # hash ceph 00:33:19.241 17:37:37 blockdev_rbd -- common/autotest_common.sh@1019 -- # export PG_NUM=128 00:33:19.241 17:37:37 blockdev_rbd -- common/autotest_common.sh@1019 -- # PG_NUM=128 00:33:19.241 17:37:37 blockdev_rbd -- common/autotest_common.sh@1020 -- # export RBD_POOL=rbd 00:33:19.241 17:37:37 blockdev_rbd -- common/autotest_common.sh@1020 -- # RBD_POOL=rbd 00:33:19.241 17:37:37 blockdev_rbd -- common/autotest_common.sh@1021 -- # export RBD_NAME=foo 00:33:19.241 17:37:37 blockdev_rbd -- common/autotest_common.sh@1021 -- # RBD_NAME=foo 
00:33:19.241 17:37:37 blockdev_rbd -- common/autotest_common.sh@1022 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:33:19.241 + base_dir=/var/tmp/ceph 00:33:19.241 + image=/var/tmp/ceph/ceph_raw.img 00:33:19.241 + dev=/dev/loop200 00:33:19.241 + pkill -9 ceph 00:33:19.241 + sleep 3 00:33:22.605 + umount /dev/loop200p2 00:33:22.605 umount: /dev/loop200p2: no mount point specified. 00:33:22.605 + losetup -d /dev/loop200 00:33:22.605 losetup: /dev/loop200: detach failed: No such device or address 00:33:22.605 + rm -rf /var/tmp/ceph 00:33:22.606 17:37:40 blockdev_rbd -- common/autotest_common.sh@1023 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 127.0.0.1 00:33:22.606 + set -e 00:33:22.606 +++ dirname /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 00:33:22.606 ++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/ceph 00:33:22.606 + script_dir=/home/vagrant/spdk_repo/spdk/scripts/ceph 00:33:22.606 + base_dir=/var/tmp/ceph 00:33:22.606 + mon_ip=127.0.0.1 00:33:22.606 + mon_dir=/var/tmp/ceph/mon.a 00:33:22.606 + pid_dir=/var/tmp/ceph/pid 00:33:22.606 + ceph_conf=/var/tmp/ceph/ceph.conf 00:33:22.606 + mnt_dir=/var/tmp/ceph/mnt 00:33:22.606 + image=/var/tmp/ceph_raw.img 00:33:22.606 + dev=/dev/loop200 00:33:22.606 + modprobe loop 00:33:22.606 + umount /dev/loop200p2 00:33:22.606 umount: /dev/loop200p2: no mount point specified. 00:33:22.606 + true 00:33:22.606 + losetup -d /dev/loop200 00:33:22.606 losetup: /dev/loop200: detach failed: No such device or address 00:33:22.606 + true 00:33:22.606 + '[' -d /var/tmp/ceph ']' 00:33:22.606 + mkdir /var/tmp/ceph 00:33:22.606 + cp /home/vagrant/spdk_repo/spdk/scripts/ceph/ceph.conf /var/tmp/ceph/ceph.conf 00:33:22.606 + '[' '!' 
-e /var/tmp/ceph_raw.img ']' 00:33:22.606 + fallocate -l 4G /var/tmp/ceph_raw.img 00:33:22.606 + mknod /dev/loop200 b 7 200 00:33:22.606 mknod: /dev/loop200: File exists 00:33:22.606 + true 00:33:22.606 + losetup /dev/loop200 /var/tmp/ceph_raw.img 00:33:22.606 Partitioning /dev/loop200 00:33:22.606 + PARTED='parted -s' 00:33:22.606 + SGDISK=sgdisk 00:33:22.606 + echo 'Partitioning /dev/loop200' 00:33:22.606 + parted -s /dev/loop200 mktable gpt 00:33:22.606 + sleep 2 00:33:24.509 + parted -s /dev/loop200 mkpart primary 0% 2GiB 00:33:24.509 + parted -s /dev/loop200 mkpart primary 2GiB 100% 00:33:24.509 Setting name on /dev/loop200 00:33:24.509 + partno=0 00:33:24.509 + echo 'Setting name on /dev/loop200' 00:33:24.509 + sgdisk -c 1:osd-device-0-journal /dev/loop200 00:33:25.471 Warning: The kernel is still using the old partition table. 00:33:25.471 The new table will be used at the next reboot or after you 00:33:25.471 run partprobe(8) or kpartx(8) 00:33:25.471 The operation has completed successfully. 00:33:25.471 + sgdisk -c 2:osd-device-0-data /dev/loop200 00:33:26.404 Warning: The kernel is still using the old partition table. 00:33:26.404 The new table will be used at the next reboot or after you 00:33:26.404 run partprobe(8) or kpartx(8) 00:33:26.404 The operation has completed successfully. 
00:33:26.404 + kpartx /dev/loop200 00:33:26.404 loop200p1 : 0 4192256 /dev/loop200 2048 00:33:26.404 loop200p2 : 0 4192256 /dev/loop200 4194304 00:33:26.404 ++ ceph -v 00:33:26.404 ++ awk '{print $3}' 00:33:26.404 + ceph_version=17.2.7 00:33:26.404 + ceph_maj=17 00:33:26.404 + '[' 17 -gt 12 ']' 00:33:26.404 + update_config=true 00:33:26.404 + rm -f /var/log/ceph/ceph-mon.a.log 00:33:26.404 + set_min_mon_release='--set-min-mon-release 14' 00:33:26.404 + ceph_osd_extra_config='--check-needs-journal --no-mon-config' 00:33:26.404 + mnt_pt=/var/tmp/ceph/mnt/osd-device-0-data 00:33:26.404 + mkdir -p /var/tmp/ceph/mnt/osd-device-0-data 00:33:26.404 + mkfs.xfs -f /dev/disk/by-partlabel/osd-device-0-data 00:33:26.404 meta-data=/dev/disk/by-partlabel/osd-device-0-data isize=512 agcount=4, agsize=131008 blks 00:33:26.404 = sectsz=512 attr=2, projid32bit=1 00:33:26.404 = crc=1 finobt=1, sparse=1, rmapbt=0 00:33:26.404 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:33:26.404 data = bsize=4096 blocks=524032, imaxpct=25 00:33:26.404 = sunit=0 swidth=0 blks 00:33:26.404 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:33:26.404 log =internal log bsize=4096 blocks=16384, version=2 00:33:26.404 = sectsz=512 sunit=0 blks, lazy-count=1 00:33:26.404 realtime =none extsz=4096 blocks=0, rtextents=0 00:33:26.404 Discarding blocks...Done. 00:33:26.404 + mount /dev/disk/by-partlabel/osd-device-0-data /var/tmp/ceph/mnt/osd-device-0-data 00:33:26.404 + cat 00:33:26.404 + rm -rf '/var/tmp/ceph/mon.a/*' 00:33:26.404 + mkdir -p /var/tmp/ceph/mon.a 00:33:26.404 + mkdir -p /var/tmp/ceph/pid 00:33:26.404 + rm -f /etc/ceph/ceph.client.admin.keyring 00:33:26.404 + ceph-authtool --create-keyring --gen-key --name=mon. 
/var/tmp/ceph/keyring --cap mon 'allow *' 00:33:26.404 creating /var/tmp/ceph/keyring 00:33:26.661 + ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /var/tmp/ceph/keyring 00:33:26.661 + monmaptool --create --clobber --add a 127.0.0.1:12046 --print /var/tmp/ceph/monmap --set-min-mon-release 14 00:33:26.661 monmaptool: monmap file /var/tmp/ceph/monmap 00:33:26.661 monmaptool: generated fsid 11164f60-c303-474c-88c8-0d998a9224e0 00:33:26.661 setting min_mon_release = octopus 00:33:26.661 epoch 0 00:33:26.661 fsid 11164f60-c303-474c-88c8-0d998a9224e0 00:33:26.661 last_changed 2024-07-22T17:37:45.434226+0000 00:33:26.661 created 2024-07-22T17:37:45.434226+0000 00:33:26.661 min_mon_release 15 (octopus) 00:33:26.661 election_strategy: 1 00:33:26.661 0: v2:127.0.0.1:12046/0 mon.a 00:33:26.661 monmaptool: writing epoch 0 to /var/tmp/ceph/monmap (1 monitors) 00:33:26.661 + sh -c 'ulimit -c unlimited && exec ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring --mon-data=/var/tmp/ceph/mon.a' 00:33:26.661 + '[' true = true ']' 00:33:26.661 + sed -i 's/mon addr = /mon addr = v2:/g' /var/tmp/ceph/ceph.conf 00:33:26.661 + cp /var/tmp/ceph/keyring /var/tmp/ceph/mon.a/keyring 00:33:26.661 + cp /var/tmp/ceph/ceph.conf /etc/ceph/ceph.conf 00:33:26.661 + cp /var/tmp/ceph/keyring /etc/ceph/keyring 00:33:26.661 + cp /var/tmp/ceph/keyring /etc/ceph/ceph.client.admin.keyring 00:33:26.661 + chmod a+r /etc/ceph/ceph.client.admin.keyring 00:33:26.661 ++ hostname 00:33:26.661 + ceph-run sh -c 'ulimit -n 16384 && ulimit -c unlimited && exec ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring --pid-file=/var/tmp/ceph/pid/root@fedora38-cloud-1716830599-074-updated-1705279005.pid --mon-data=/var/tmp/ceph/mon.a' 00:33:26.661 + true 00:33:26.661 + '[' true = true ']' 00:33:26.661 + ceph-conf --name mon.a --show-config-value log_file 00:33:26.919 
/var/log/ceph/ceph-mon.a.log 00:33:26.919 ++ ceph -s 00:33:26.919 ++ grep id 00:33:26.919 ++ awk '{print $2}' 00:33:26.919 + fsid=11164f60-c303-474c-88c8-0d998a9224e0 00:33:26.919 + sed -i 's/perf = true/perf = true\n\tfsid = 11164f60-c303-474c-88c8-0d998a9224e0 \n/g' /var/tmp/ceph/ceph.conf 00:33:26.919 + (( ceph_maj < 18 )) 00:33:26.919 + sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' /var/tmp/ceph/ceph.conf 00:33:26.919 + cat /var/tmp/ceph/ceph.conf 00:33:26.919 [global] 00:33:26.919 debug_lockdep = 0/0 00:33:26.919 debug_context = 0/0 00:33:26.919 debug_crush = 0/0 00:33:26.919 debug_buffer = 0/0 00:33:26.919 debug_timer = 0/0 00:33:26.919 debug_filer = 0/0 00:33:26.919 debug_objecter = 0/0 00:33:26.919 debug_rados = 0/0 00:33:26.919 debug_rbd = 0/0 00:33:26.919 debug_ms = 0/0 00:33:26.919 debug_monc = 0/0 00:33:26.919 debug_tp = 0/0 00:33:26.919 debug_auth = 0/0 00:33:26.919 debug_finisher = 0/0 00:33:26.919 debug_heartbeatmap = 0/0 00:33:26.919 debug_perfcounter = 0/0 00:33:26.919 debug_asok = 0/0 00:33:26.919 debug_throttle = 0/0 00:33:26.919 debug_mon = 0/0 00:33:26.919 debug_paxos = 0/0 00:33:26.919 debug_rgw = 0/0 00:33:26.919 00:33:26.919 perf = true 00:33:26.919 osd objectstore = filestore 00:33:26.919 00:33:26.919 fsid = 11164f60-c303-474c-88c8-0d998a9224e0 00:33:26.919 00:33:26.919 mutex_perf_counter = false 00:33:26.919 throttler_perf_counter = false 00:33:26.919 rbd cache = false 00:33:26.919 mon_allow_pool_delete = true 00:33:26.919 00:33:26.919 osd_pool_default_size = 1 00:33:26.919 00:33:26.919 [mon] 00:33:26.919 mon_max_pool_pg_num=166496 00:33:26.919 mon_osd_max_split_count = 10000 00:33:26.919 mon_pg_warn_max_per_osd = 10000 00:33:26.919 00:33:26.919 [osd] 00:33:26.919 osd_op_threads = 64 00:33:26.919 filestore_queue_max_ops=5000 00:33:26.919 filestore_queue_committing_max_ops=5000 00:33:26.919 journal_max_write_entries=1000 00:33:26.919 journal_queue_max_ops=3000 00:33:26.919 objecter_inflight_ops=102400 00:33:26.919 
filestore_wbthrottle_enable=false 00:33:26.919 filestore_queue_max_bytes=1048576000 00:33:26.920 filestore_queue_committing_max_bytes=1048576000 00:33:26.920 journal_max_write_bytes=1048576000 00:33:26.920 journal_queue_max_bytes=1048576000 00:33:26.920 ms_dispatch_throttle_bytes=1048576000 00:33:26.920 objecter_inflight_op_bytes=1048576000 00:33:26.920 filestore_max_sync_interval=10 00:33:26.920 osd_client_message_size_cap = 0 00:33:26.920 osd_client_message_cap = 0 00:33:26.920 osd_enable_op_tracker = false 00:33:26.920 filestore_fd_cache_size = 10240 00:33:26.920 filestore_fd_cache_shards = 64 00:33:26.920 filestore_op_threads = 16 00:33:26.920 osd_op_num_shards = 48 00:33:26.920 osd_op_num_threads_per_shard = 2 00:33:26.920 osd_pg_object_context_cache_count = 10240 00:33:26.920 filestore_odsync_write = True 00:33:26.920 journal_dynamic_throttle = True 00:33:26.920 00:33:26.920 [osd.0] 00:33:26.920 osd data = /var/tmp/ceph/mnt/osd-device-0-data 00:33:26.920 osd journal = /dev/disk/by-partlabel/osd-device-0-journal 00:33:26.920 00:33:26.920 # add mon address 00:33:26.920 [mon.a] 00:33:26.920 mon addr = v2:127.0.0.1:12046 00:33:26.920 + i=0 00:33:26.920 + mkdir -p /var/tmp/ceph/mnt 00:33:26.920 ++ uuidgen 00:33:26.920 + uuid=380e46d1-cd9b-4d0b-9dfc-807360c22f35 00:33:26.920 + ceph -c /var/tmp/ceph/ceph.conf osd create 380e46d1-cd9b-4d0b-9dfc-807360c22f35 0 00:33:27.483 0 00:33:27.483 + ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --mkfs --mkkey --osd-uuid 380e46d1-cd9b-4d0b-9dfc-807360c22f35 --check-needs-journal --no-mon-config 00:33:27.483 2024-07-22T17:37:46.234+0000 7f495f284400 -1 auth: error reading file: /var/tmp/ceph/mnt/osd-device-0-data/keyring: can't open /var/tmp/ceph/mnt/osd-device-0-data/keyring: (2) No such file or directory 00:33:27.483 2024-07-22T17:37:46.235+0000 7f495f284400 -1 created new key in keyring /var/tmp/ceph/mnt/osd-device-0-data/keyring 00:33:27.483 2024-07-22T17:37:46.284+0000 7f495f284400 -1 journal check: ondisk fsid 
00000000-0000-0000-0000-000000000000 doesn't match expected 380e46d1-cd9b-4d0b-9dfc-807360c22f35, invalid (someone else's?) journal 00:33:27.483 2024-07-22T17:37:46.315+0000 7f495f284400 -1 journal do_read_entry(4096): bad header magic 00:33:27.483 2024-07-22T17:37:46.315+0000 7f495f284400 -1 journal do_read_entry(4096): bad header magic 00:33:27.483 ++ hostname 00:33:27.483 + ceph -c /var/tmp/ceph/ceph.conf osd crush add osd.0 1.0 host=fedora38-cloud-1716830599-074-updated-1705279005 root=default 00:33:28.852 add item id 0 name 'osd.0' weight 1 at location {host=fedora38-cloud-1716830599-074-updated-1705279005,root=default} to crush map 00:33:28.852 + ceph -c /var/tmp/ceph/ceph.conf -i /var/tmp/ceph/mnt/osd-device-0-data/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow *' 00:33:29.110 added key for osd.0 00:33:29.110 ++ ceph -c /var/tmp/ceph/ceph.conf config get osd osd_class_dir 00:33:29.366 + class_dir=/lib64/rados-classes 00:33:29.366 + [[ -e /lib64/rados-classes ]] 00:33:29.366 + ceph -c /var/tmp/ceph/ceph.conf config set osd osd_class_dir /lib64/rados-classes 00:33:29.624 + pkill -9 ceph-osd 00:33:29.624 + true 00:33:29.624 + sleep 2 00:33:31.523 + mkdir -p /var/tmp/ceph/pid 00:33:31.523 + env -i TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --pid-file=/var/tmp/ceph/pid/ceph-osd.0.pid 00:33:31.782 2024-07-22T17:37:50.499+0000 7f0e5745c400 -1 Falling back to public interface 00:33:31.782 2024-07-22T17:37:50.546+0000 7f0e5745c400 -1 journal do_read_entry(8192): bad header magic 00:33:31.782 2024-07-22T17:37:50.547+0000 7f0e5745c400 -1 journal do_read_entry(8192): bad header magic 00:33:31.782 2024-07-22T17:37:50.556+0000 7f0e5745c400 -1 osd.0 0 log_to_monitors true 00:33:31.782 17:37:50 blockdev_rbd -- common/autotest_common.sh@1025 -- # ceph osd pool create rbd 128 00:33:32.717 pool 'rbd' created 00:33:32.975 17:37:51 blockdev_rbd -- common/autotest_common.sh@1026 -- # rbd create foo --size 1000 
00:33:38.244 17:37:56 blockdev_rbd -- bdev/blockdev.sh@262 -- # timing_exit rbd_setup 00:33:38.244 17:37:56 blockdev_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:38.244 17:37:56 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:38.244 17:37:57 blockdev_rbd -- bdev/blockdev.sh@264 -- # rpc_cmd bdev_rbd_create -b Ceph0 rbd foo 512 00:33:38.244 17:37:57 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.244 17:37:57 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:38.244 [2024-07-22 17:37:57.110718] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:33:38.244 WARNING:bdev_rbd_create should be used with specifying -c to have a cluster name after bdev_rbd_register_cluster. 00:33:38.244 Ceph0 00:33:38.244 17:37:57 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.244 17:37:57 blockdev_rbd -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:33:38.244 17:37:57 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.244 17:37:57 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:38.244 17:37:57 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.244 17:37:57 blockdev_rbd -- bdev/blockdev.sh@739 -- # cat 00:33:38.244 17:37:57 blockdev_rbd -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:33:38.244 17:37:57 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.244 17:37:57 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:38.244 17:37:57 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.244 17:37:57 blockdev_rbd -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:33:38.244 17:37:57 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.244 17:37:57 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:38.244 17:37:57 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:33:38.244 17:37:57 blockdev_rbd -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:33:38.244 17:37:57 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.244 17:37:57 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:38.244 17:37:57 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.244 17:37:57 blockdev_rbd -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:33:38.244 17:37:57 blockdev_rbd -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:33:38.244 17:37:57 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.244 17:37:57 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:38.244 17:37:57 blockdev_rbd -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:33:38.513 17:37:57 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.513 17:37:57 blockdev_rbd -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:33:38.513 17:37:57 blockdev_rbd -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Ceph0",' ' "aliases": [' ' "6637ef4b-d533-41b3-ac4f-1a203a5469d1"' ' ],' ' "product_name": "Ceph Rbd Disk",' ' "block_size": 512,' ' "num_blocks": 2048000,' ' "uuid": "6637ef4b-d533-41b3-ac4f-1a203a5469d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": true,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "rbd": {' ' "pool_name": "rbd",' ' "rbd_name": "foo"' 
' }' ' }' '}' 00:33:38.513 17:37:57 blockdev_rbd -- bdev/blockdev.sh@748 -- # jq -r .name 00:33:38.513 17:37:57 blockdev_rbd -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:33:38.513 17:37:57 blockdev_rbd -- bdev/blockdev.sh@751 -- # hello_world_bdev=Ceph0 00:33:38.513 17:37:57 blockdev_rbd -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:33:38.513 17:37:57 blockdev_rbd -- bdev/blockdev.sh@753 -- # killprocess 125605 00:33:38.513 17:37:57 blockdev_rbd -- common/autotest_common.sh@948 -- # '[' -z 125605 ']' 00:33:38.513 17:37:57 blockdev_rbd -- common/autotest_common.sh@952 -- # kill -0 125605 00:33:38.513 17:37:57 blockdev_rbd -- common/autotest_common.sh@953 -- # uname 00:33:38.513 17:37:57 blockdev_rbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:38.513 17:37:57 blockdev_rbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125605 00:33:38.513 killing process with pid 125605 00:33:38.513 17:37:57 blockdev_rbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:38.513 17:37:57 blockdev_rbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:38.513 17:37:57 blockdev_rbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125605' 00:33:38.513 17:37:57 blockdev_rbd -- common/autotest_common.sh@967 -- # kill 125605 00:33:38.513 17:37:57 blockdev_rbd -- common/autotest_common.sh@972 -- # wait 125605 00:33:41.058 17:37:59 blockdev_rbd -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:41.058 17:37:59 blockdev_rbd -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Ceph0 '' 00:33:41.058 17:37:59 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:33:41.058 17:37:59 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:41.058 17:37:59 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 
00:33:41.058 ************************************ 00:33:41.058 START TEST bdev_hello_world 00:33:41.058 ************************************ 00:33:41.058 17:37:59 blockdev_rbd.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Ceph0 '' 00:33:41.058 [2024-07-22 17:37:59.870417] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:33:41.058 [2024-07-22 17:37:59.871356] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126488 ] 00:33:41.315 [2024-07-22 17:38:00.052463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.573 [2024-07-22 17:38:00.311087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:41.830 [2024-07-22 17:38:00.761339] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:33:41.830 [2024-07-22 17:38:00.775062] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:33:41.830 [2024-07-22 17:38:00.775122] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Ceph0 00:33:41.830 [2024-07-22 17:38:00.775149] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:33:41.830 [2024-07-22 17:38:00.777532] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:33:42.088 [2024-07-22 17:38:00.796285] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:33:42.088 [2024-07-22 17:38:00.796334] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:33:42.088 [2024-07-22 17:38:00.801105] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:33:42.088 00:33:42.088 [2024-07-22 17:38:00.801157] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:33:43.463 00:33:43.463 real 0m2.296s 00:33:43.463 user 0m1.841s 00:33:43.463 sys 0m0.331s 00:33:43.463 ************************************ 00:33:43.463 END TEST bdev_hello_world 00:33:43.463 ************************************ 00:33:43.463 17:38:02 blockdev_rbd.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:43.463 17:38:02 blockdev_rbd.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:33:43.463 17:38:02 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:33:43.463 17:38:02 blockdev_rbd -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:33:43.463 17:38:02 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:43.463 17:38:02 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:43.463 17:38:02 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:43.463 ************************************ 00:33:43.463 START TEST bdev_bounds 00:33:43.463 ************************************ 00:33:43.463 17:38:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:33:43.463 Process bdevio pid: 126545 00:33:43.463 17:38:02 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=126545 00:33:43.463 17:38:02 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:33:43.463 17:38:02 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 126545' 00:33:43.463 17:38:02 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:33:43.464 17:38:02 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 126545 00:33:43.464 17:38:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 
126545 ']' 00:33:43.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:43.464 17:38:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:43.464 17:38:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:43.464 17:38:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:43.464 17:38:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:43.464 17:38:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:33:43.464 [2024-07-22 17:38:02.239733] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:33:43.464 [2024-07-22 17:38:02.239963] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126545 ] 00:33:43.722 [2024-07-22 17:38:02.415141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:43.980 [2024-07-22 17:38:02.675485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:43.980 [2024-07-22 17:38:02.675620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:43.980 [2024-07-22 17:38:02.675639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:44.239 [2024-07-22 17:38:03.125763] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:33:44.239 17:38:03 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:44.239 17:38:03 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:33:44.239 17:38:03 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py 
perform_tests 00:33:44.497 I/O targets: 00:33:44.497 Ceph0: 2048000 blocks of 512 bytes (1000 MiB) 00:33:44.497 00:33:44.497 00:33:44.497 CUnit - A unit testing framework for C - Version 2.1-3 00:33:44.497 http://cunit.sourceforge.net/ 00:33:44.497 00:33:44.497 00:33:44.497 Suite: bdevio tests on: Ceph0 00:33:44.497 Test: blockdev write read block ...passed 00:33:44.497 Test: blockdev write zeroes read block ...passed 00:33:44.497 Test: blockdev write zeroes read no split ...passed 00:33:44.497 Test: blockdev write zeroes read split ...passed 00:33:44.497 Test: blockdev write zeroes read split partial ...passed 00:33:44.497 Test: blockdev reset ...passed 00:33:44.497 Test: blockdev write read 8 blocks ...passed 00:33:44.497 Test: blockdev write read size > 128k ...passed 00:33:44.497 Test: blockdev write read invalid size ...passed 00:33:44.497 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:44.497 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:44.497 Test: blockdev write read max offset ...passed 00:33:44.497 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:44.497 Test: blockdev writev readv 8 blocks ...passed 00:33:44.497 Test: blockdev writev readv 30 x 1block ...passed 00:33:44.497 Test: blockdev writev readv block ...passed 00:33:44.497 Test: blockdev writev readv size > 128k ...passed 00:33:44.755 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:44.755 Test: blockdev comparev and writev ...passed 00:33:44.755 Test: blockdev nvme passthru rw ...passed 00:33:44.755 Test: blockdev nvme passthru vendor specific ...passed 00:33:44.755 Test: blockdev nvme admin passthru ...passed 00:33:44.755 Test: blockdev copy ...passed 00:33:44.755 00:33:44.755 Run Summary: Type Total Ran Passed Failed Inactive 00:33:44.755 suites 1 1 n/a 0 0 00:33:44.755 tests 23 23 23 0 0 00:33:44.755 asserts 130 130 130 0 n/a 00:33:44.755 00:33:44.755 Elapsed time = 0.484 seconds 
00:33:44.755 0 00:33:44.755 17:38:03 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 126545 00:33:44.755 17:38:03 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 126545 ']' 00:33:44.755 17:38:03 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 126545 00:33:44.755 17:38:03 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:33:44.755 17:38:03 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:44.755 17:38:03 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 126545 00:33:44.755 17:38:03 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:44.755 17:38:03 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:44.755 17:38:03 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 126545' 00:33:44.755 killing process with pid 126545 00:33:44.755 17:38:03 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@967 -- # kill 126545 00:33:44.755 17:38:03 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@972 -- # wait 126545 00:33:46.128 17:38:04 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:33:46.128 00:33:46.128 real 0m2.668s 00:33:46.128 user 0m5.804s 00:33:46.128 sys 0m0.481s 00:33:46.128 17:38:04 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:46.128 17:38:04 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:33:46.128 ************************************ 00:33:46.128 END TEST bdev_bounds 00:33:46.128 ************************************ 00:33:46.128 17:38:04 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:33:46.128 17:38:04 blockdev_rbd -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Ceph0 '' 00:33:46.128 17:38:04 
blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:33:46.128 17:38:04 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:46.128 17:38:04 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:46.128 ************************************ 00:33:46.128 START TEST bdev_nbd 00:33:46.128 ************************************ 00:33:46.128 17:38:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Ceph0 '' 00:33:46.128 17:38:04 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:33:46.128 17:38:04 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:33:46.128 17:38:04 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:46.128 17:38:04 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:46.128 17:38:04 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Ceph0') 00:33:46.128 17:38:04 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:33:46.128 17:38:04 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:33:46.128 17:38:04 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:33:46.128 17:38:04 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:33:46.128 17:38:04 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:33:46.128 17:38:04 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:33:46.128 17:38:04 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:33:46.128 17:38:04 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:33:46.128 17:38:04 blockdev_rbd.bdev_nbd 
-- bdev/blockdev.sh@314 -- # bdev_list=('Ceph0') 00:33:46.129 17:38:04 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:33:46.129 17:38:04 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=126624 00:33:46.129 17:38:04 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:33:46.129 17:38:04 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 126624 /var/tmp/spdk-nbd.sock 00:33:46.129 17:38:04 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:33:46.129 17:38:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 126624 ']' 00:33:46.129 17:38:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:33:46.129 17:38:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:46.129 17:38:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:33:46.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:33:46.129 17:38:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:46.129 17:38:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:33:46.129 [2024-07-22 17:38:04.938551] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:33:46.129 [2024-07-22 17:38:04.939030] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:46.386 [2024-07-22 17:38:05.108472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:46.686 [2024-07-22 17:38:05.397752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:46.944 [2024-07-22 17:38:05.850106] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:33:46.944 17:38:05 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:46.944 17:38:05 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:33:46.944 17:38:05 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Ceph0 00:33:46.944 17:38:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:46.944 17:38:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Ceph0') 00:33:46.944 17:38:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:33:46.944 17:38:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Ceph0 00:33:46.944 17:38:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:46.944 17:38:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Ceph0') 00:33:46.944 17:38:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:33:46.944 17:38:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:33:46.944 17:38:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:33:46.944 17:38:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:33:46.944 17:38:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # 
(( i < 1 )) 00:33:46.944 17:38:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Ceph0 00:33:47.510 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:33:47.510 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:33:47.510 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:33:47.510 17:38:06 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:33:47.510 17:38:06 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:33:47.510 17:38:06 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:47.510 17:38:06 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:47.510 17:38:06 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:33:47.510 17:38:06 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:33:47.510 17:38:06 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:47.510 17:38:06 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:47.510 17:38:06 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:47.510 1+0 records in 00:33:47.510 1+0 records out 00:33:47.510 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112135 s, 3.7 MB/s 00:33:47.510 17:38:06 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:47.510 17:38:06 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:47.510 17:38:06 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:47.510 17:38:06 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 
0 ']' 00:33:47.510 17:38:06 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:47.510 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:47.510 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:33:47.510 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:47.769 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:33:47.769 { 00:33:47.769 "nbd_device": "/dev/nbd0", 00:33:47.769 "bdev_name": "Ceph0" 00:33:47.769 } 00:33:47.769 ]' 00:33:47.769 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:33:47.769 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:33:47.769 { 00:33:47.769 "nbd_device": "/dev/nbd0", 00:33:47.769 "bdev_name": "Ceph0" 00:33:47.769 } 00:33:47.769 ]' 00:33:47.769 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:33:47.769 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:47.769 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:47.769 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:47.769 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:47.769 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:47.769 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:47.769 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:48.028 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:48.028 17:38:06 
blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:48.028 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:48.028 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:48.028 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:48.028 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:48.028 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:48.028 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:48.028 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:48.028 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:48.028 17:38:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- 
bdev/nbd_common.sh@127 -- # return 0 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Ceph0 /dev/nbd0 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Ceph0') 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Ceph0 /dev/nbd0 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Ceph0') 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:48.286 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Ceph0 /dev/nbd0 00:33:48.545 /dev/nbd0 00:33:48.545 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:48.545 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:48.545 17:38:07 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@866 -- # local 
nbd_name=nbd0 00:33:48.545 17:38:07 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:33:48.545 17:38:07 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:48.545 17:38:07 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:48.545 17:38:07 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:33:48.545 17:38:07 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:33:48.545 17:38:07 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:48.545 17:38:07 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:48.545 17:38:07 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:48.545 1+0 records in 00:33:48.545 1+0 records out 00:33:48.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00149524 s, 2.7 MB/s 00:33:48.545 17:38:07 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:48.545 17:38:07 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:48.545 17:38:07 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:48.545 17:38:07 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:48.545 17:38:07 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:48.545 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:48.545 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:48.545 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:48.545 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:48.545 17:38:07 blockdev_rbd.bdev_nbd -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:48.804 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:33:48.804 { 00:33:48.804 "nbd_device": "/dev/nbd0", 00:33:48.804 "bdev_name": "Ceph0" 00:33:48.804 } 00:33:48.804 ]' 00:33:48.804 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:33:48.804 { 00:33:48.804 "nbd_device": "/dev/nbd0", 00:33:48.804 "bdev_name": "Ceph0" 00:33:48.804 } 00:33:48.804 ]' 00:33:48.804 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:48.804 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:33:48.804 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:33:48.804 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:48.804 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:33:48.805 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:33:48.805 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:33:48.805 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:33:48.805 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:33:48.805 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:33:48.805 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:48.805 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:33:48.805 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:48.805 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:33:48.805 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:33:48.805 256+0 records in 00:33:48.805 256+0 records out 00:33:48.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00483608 s, 217 MB/s 00:33:48.805 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:48.805 17:38:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:33:50.203 256+0 records in 00:33:50.203 256+0 records out 00:33:50.203 1048576 bytes (1.0 MB, 1.0 MiB) copied, 1.27441 s, 823 kB/s 00:33:50.203 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:33:50.203 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:33:50.203 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:50.203 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:33:50.203 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:50.203 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:33:50.203 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:33:50.203 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:50.203 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:33:50.203 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:50.203 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:50.203 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:50.203 17:38:09 
blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:50.203 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:50.203 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:50.203 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:50.203 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:50.461 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:50.461 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:50.461 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:50.461 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:50.461 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:50.461 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:50.461 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:50.462 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:50.462 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:50.462 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:50.462 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:50.720 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:50.720 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:50.720 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:50.720 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 
00:33:50.720 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:33:50.720 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:50.720 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:33:50.720 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:33:50.720 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:33:50.720 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:33:50.720 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:33:50.720 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:33:50.720 17:38:09 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:50.720 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:50.720 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:33:50.720 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:33:50.720 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:33:50.720 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:33:50.978 malloc_lvol_verify 00:33:51.236 17:38:09 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:33:51.494 603586d9-4c3b-4673-b9ba-a50a430f456e 00:33:51.494 17:38:10 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:33:51.494 f2ee13a6-8f35-4c1d-9b38-7e9e89370173 00:33:51.494 17:38:10 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@138 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:33:52.060 /dev/nbd0 00:33:52.060 17:38:10 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:33:52.060 mke2fs 1.46.5 (30-Dec-2021) 00:33:52.060 Discarding device blocks: 0/4096 done 00:33:52.060 Creating filesystem with 4096 1k blocks and 1024 inodes 00:33:52.060 00:33:52.060 Allocating group tables: 0/1 done 00:33:52.060 Writing inode tables: 0/1 done 00:33:52.060 Creating journal (1024 blocks): done 00:33:52.060 Writing superblocks and filesystem accounting information: 0/1 done 00:33:52.060 00:33:52.060 17:38:10 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:33:52.060 17:38:10 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:52.060 17:38:10 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:52.060 17:38:10 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:52.060 17:38:10 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:52.060 17:38:10 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:52.060 17:38:10 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:52.060 17:38:10 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:52.319 17:38:11 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:52.319 17:38:11 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:52.319 17:38:11 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:52.319 17:38:11 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:52.319 17:38:11 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:52.319 17:38:11 blockdev_rbd.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:52.319 17:38:11 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:52.319 17:38:11 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:52.319 17:38:11 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:33:52.319 17:38:11 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:33:52.319 17:38:11 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 126624 00:33:52.319 17:38:11 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 126624 ']' 00:33:52.319 17:38:11 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 126624 00:33:52.319 17:38:11 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:33:52.319 17:38:11 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:52.319 17:38:11 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 126624 00:33:52.319 17:38:11 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:52.319 17:38:11 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:52.319 killing process with pid 126624 00:33:52.319 17:38:11 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 126624' 00:33:52.319 17:38:11 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@967 -- # kill 126624 00:33:52.319 17:38:11 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@972 -- # wait 126624 00:33:53.694 17:38:12 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:33:53.694 00:33:53.694 real 0m7.629s 00:33:53.694 user 0m10.054s 00:33:53.694 sys 0m1.857s 00:33:53.694 17:38:12 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:53.694 17:38:12 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:33:53.694 ************************************ 
00:33:53.694 END TEST bdev_nbd 00:33:53.694 ************************************ 00:33:53.694 17:38:12 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:33:53.694 17:38:12 blockdev_rbd -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:33:53.694 17:38:12 blockdev_rbd -- bdev/blockdev.sh@763 -- # '[' rbd = nvme ']' 00:33:53.694 17:38:12 blockdev_rbd -- bdev/blockdev.sh@763 -- # '[' rbd = gpt ']' 00:33:53.694 17:38:12 blockdev_rbd -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:33:53.694 17:38:12 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:53.694 17:38:12 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:53.694 17:38:12 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:53.694 ************************************ 00:33:53.694 START TEST bdev_fio 00:33:53.694 ************************************ 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:33:53.694 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:53.694 
17:38:12 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Ceph0]' 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Ceph0 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev 
--iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:33:53.694 ************************************ 00:33:53.694 START TEST bdev_fio_rw_verify 00:33:53.694 ************************************ 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:33:53.694 17:38:12 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 
00:33:53.953 job_Ceph0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:53.953 fio-3.35 00:33:53.953 Starting 1 thread 00:34:06.157 00:34:06.158 job_Ceph0: (groupid=0, jobs=1): err= 0: pid=126874: Mon Jul 22 17:38:23 2024 00:34:06.158 read: IOPS=409, BW=1637KiB/s (1677kB/s)(16.0MiB/10006msec) 00:34:06.158 slat (usec): min=5, max=330, avg=18.60, stdev=19.02 00:34:06.158 clat (usec): min=496, max=337722, avg=3841.80, stdev=22716.89 00:34:06.158 lat (usec): min=513, max=337732, avg=3860.40, stdev=22716.95 00:34:06.158 clat percentiles (usec): 00:34:06.158 | 50.000th=[ 1418], 99.000th=[ 67634], 99.900th=[329253], 00:34:06.158 | 99.990th=[337642], 99.999th=[337642] 00:34:06.158 write: IOPS=472, BW=1891KiB/s (1937kB/s)(18.5MiB/10006msec); 0 zone resets 00:34:06.158 slat (usec): min=19, max=2040, avg=46.80, stdev=44.14 00:34:06.158 clat (msec): min=2, max=805, avg=13.51, stdev=42.67 00:34:06.158 lat (msec): min=2, max=805, avg=13.55, stdev=42.67 00:34:06.158 clat percentiles (msec): 00:34:06.158 | 50.000th=[ 6], 99.000th=[ 124], 99.900th=[ 760], 99.990th=[ 810], 00:34:06.158 | 99.999th=[ 810] 00:34:06.158 bw ( KiB/s): min= 112, max= 6288, per=99.93%, avg=1890.70, stdev=1909.64, samples=20 00:34:06.158 iops : min= 28, max= 1572, avg=472.60, stdev=477.38, samples=20 00:34:06.158 lat (usec) : 500=0.01%, 750=0.59%, 1000=1.95% 00:34:06.158 lat (msec) : 2=39.90%, 4=10.62%, 10=40.15%, 20=1.91%, 50=0.96% 00:34:06.158 lat (msec) : 100=2.35%, 250=1.21%, 500=0.26%, 1000=0.09% 00:34:06.158 cpu : usr=97.76%, sys=1.03%, ctx=427, majf=0, minf=12546 00:34:06.158 IO depths : 1=0.1%, 2=0.1%, 4=10.2%, 8=89.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:06.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.158 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.158 issued rwts: total=4096,4731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.158 latency : target=0, window=0, 
percentile=100.00%, depth=8 00:34:06.158 00:34:06.158 Run status group 0 (all jobs): 00:34:06.158 READ: bw=1637KiB/s (1677kB/s), 1637KiB/s-1637KiB/s (1677kB/s-1677kB/s), io=16.0MiB (16.8MB), run=10006-10006msec 00:34:06.158 WRITE: bw=1891KiB/s (1937kB/s), 1891KiB/s-1891KiB/s (1937kB/s-1937kB/s), io=18.5MiB (19.4MB), run=10006-10006msec 00:34:06.726 ----------------------------------------------------- 00:34:06.726 Suppressions used: 00:34:06.726 count bytes template 00:34:06.726 1 6 /usr/src/fio/parse.c 00:34:06.726 629 60384 /usr/src/fio/iolog.c 00:34:06.726 1 8 libtcmalloc_minimal.so 00:34:06.726 1 904 libcrypto.so 00:34:06.726 ----------------------------------------------------- 00:34:06.726 00:34:06.726 00:34:06.726 real 0m12.929s 00:34:06.726 user 0m13.312s 00:34:06.726 sys 0m1.764s 00:34:06.726 17:38:25 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:06.726 17:38:25 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:34:06.726 ************************************ 00:34:06.726 END TEST bdev_fio_rw_verify 00:34:06.726 ************************************ 00:34:06.726 17:38:25 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:34:06.726 17:38:25 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:34:06.726 17:38:25 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:06.726 17:38:25 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:34:06.726 17:38:25 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:06.726 17:38:25 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:34:06.726 17:38:25 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:34:06.726 17:38:25 
blockdev_rbd.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:34:06.726 17:38:25 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:34:06.726 17:38:25 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:34:06.726 17:38:25 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:34:06.726 17:38:25 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:34:06.726 17:38:25 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:06.726 17:38:25 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:34:06.726 17:38:25 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:34:06.726 17:38:25 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:34:06.726 17:38:25 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:34:06.727 17:38:25 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Ceph0",' ' "aliases": [' ' "6637ef4b-d533-41b3-ac4f-1a203a5469d1"' ' ],' ' "product_name": "Ceph Rbd Disk",' ' "block_size": 512,' ' "num_blocks": 2048000,' ' "uuid": "6637ef4b-d533-41b3-ac4f-1a203a5469d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": true,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' 
"nvme_iov_md": false' ' },' ' "driver_specific": {' ' "rbd": {' ' "pool_name": "rbd",' ' "rbd_name": "foo"' ' }' ' }' '}' 00:34:06.727 17:38:25 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:34:06.727 17:38:25 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Ceph0 ]] 00:34:06.727 17:38:25 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:34:06.727 17:38:25 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Ceph0",' ' "aliases": [' ' "6637ef4b-d533-41b3-ac4f-1a203a5469d1"' ' ],' ' "product_name": "Ceph Rbd Disk",' ' "block_size": 512,' ' "num_blocks": 2048000,' ' "uuid": "6637ef4b-d533-41b3-ac4f-1a203a5469d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": true,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "rbd": {' ' "pool_name": "rbd",' ' "rbd_name": "foo"' ' }' ' }' '}' 00:34:06.727 17:38:25 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:34:06.727 17:38:25 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Ceph0]' 00:34:06.727 17:38:25 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Ceph0 00:34:06.727 17:38:25 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev 
--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:34:06.985 17:38:25 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:34:06.985 17:38:25 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:06.985 17:38:25 blockdev_rbd.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:34:06.985 ************************************ 00:34:06.985 START TEST bdev_fio_trim 00:34:06.985 ************************************ 00:34:06.985 17:38:25 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:34:06.986 17:38:25 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:34:06.986 17:38:25 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:06.986 17:38:25 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:06.986 17:38:25 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:06.986 17:38:25 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:06.986 17:38:25 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:34:06.986 17:38:25 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:06.986 17:38:25 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:06.986 17:38:25 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:06.986 17:38:25 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:34:06.986 17:38:25 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:06.986 17:38:25 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:06.986 17:38:25 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:06.986 17:38:25 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:34:06.986 17:38:25 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:06.986 17:38:25 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:34:07.244 job_Ceph0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:07.244 fio-3.35 00:34:07.244 Starting 1 thread 00:34:19.465 00:34:19.465 job_Ceph0: (groupid=0, jobs=1): err= 0: pid=127064: Mon Jul 22 17:38:37 2024 00:34:19.465 
write: IOPS=717, BW=2870KiB/s (2939kB/s)(28.0MiB/10002msec); 0 zone resets 00:34:19.465 slat (usec): min=8, max=979, avg=35.80, stdev=40.72 00:34:19.465 clat (msec): min=2, max=1032, avg=10.93, stdev=32.95 00:34:19.465 lat (msec): min=2, max=1033, avg=10.97, stdev=32.95 00:34:19.465 clat percentiles (msec): 00:34:19.465 | 50.000th=[ 10], 99.000th=[ 19], 99.900th=[ 936], 99.990th=[ 1036], 00:34:19.465 | 99.999th=[ 1036] 00:34:19.465 bw ( KiB/s): min= 56, max= 4032, per=98.46%, avg=2826.53, stdev=1090.04, samples=19 00:34:19.465 iops : min= 14, max= 1008, avg=706.63, stdev=272.51, samples=19 00:34:19.465 trim: IOPS=717, BW=2870KiB/s (2939kB/s)(28.0MiB/10002msec); 0 zone resets 00:34:19.465 slat (usec): min=5, max=400, avg=17.16, stdev=20.29 00:34:19.465 clat (usec): min=5, max=10984, avg=153.88, stdev=272.36 00:34:19.465 lat (usec): min=22, max=11008, avg=171.04, stdev=273.30 00:34:19.465 clat percentiles (usec): 00:34:19.465 | 50.000th=[ 127], 99.000th=[ 449], 99.900th=[ 766], 99.990th=[10945], 00:34:19.465 | 99.999th=[10945] 00:34:19.466 bw ( KiB/s): min= 56, max= 4032, per=98.56%, avg=2829.89, stdev=1093.23, samples=19 00:34:19.466 iops : min= 14, max= 1008, avg=707.47, stdev=273.31, samples=19 00:34:19.466 lat (usec) : 10=0.06%, 20=0.77%, 50=6.97%, 100=12.30%, 250=22.08% 00:34:19.466 lat (usec) : 500=7.55%, 750=0.22%, 1000=0.01% 00:34:19.466 lat (msec) : 2=0.01%, 4=1.35%, 10=27.46%, 20=20.84%, 50=0.16% 00:34:19.466 lat (msec) : 100=0.06%, 250=0.11%, 1000=0.05%, 2000=0.01% 00:34:19.466 cpu : usr=97.07%, sys=1.59%, ctx=632, majf=0, minf=19120 00:34:19.466 IO depths : 1=0.1%, 2=0.1%, 4=15.6%, 8=84.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:19.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.466 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.466 issued rwts: total=0,7177,7177,0 short=0,0,0,0 dropped=0,0,0,0 00:34:19.466 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:19.466 00:34:19.466 
Run status group 0 (all jobs): 00:34:19.466 WRITE: bw=2870KiB/s (2939kB/s), 2870KiB/s-2870KiB/s (2939kB/s-2939kB/s), io=28.0MiB (29.4MB), run=10002-10002msec 00:34:19.466 TRIM: bw=2870KiB/s (2939kB/s), 2870KiB/s-2870KiB/s (2939kB/s-2939kB/s), io=28.0MiB (29.4MB), run=10002-10002msec 00:34:19.725 ----------------------------------------------------- 00:34:19.725 Suppressions used: 00:34:19.725 count bytes template 00:34:19.725 1 6 /usr/src/fio/parse.c 00:34:19.725 1 8 libtcmalloc_minimal.so 00:34:19.725 1 904 libcrypto.so 00:34:19.725 ----------------------------------------------------- 00:34:19.725 00:34:19.725 00:34:19.725 real 0m12.869s 00:34:19.725 user 0m13.051s 00:34:19.725 sys 0m1.311s 00:34:19.725 17:38:38 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:19.725 17:38:38 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:34:19.725 ************************************ 00:34:19.725 END TEST bdev_fio_trim 00:34:19.725 ************************************ 00:34:19.725 17:38:38 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:34:19.725 17:38:38 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:34:19.725 17:38:38 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:19.725 17:38:38 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:34:19.725 /home/vagrant/spdk_repo/spdk 00:34:19.725 17:38:38 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:34:19.725 00:34:19.725 real 0m26.115s 00:34:19.725 user 0m26.522s 00:34:19.725 sys 0m3.216s 00:34:19.725 17:38:38 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:19.725 17:38:38 blockdev_rbd.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:34:19.725 ************************************ 00:34:19.725 END TEST bdev_fio 00:34:19.725 ************************************ 00:34:19.725 
17:38:38 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:34:19.725 17:38:38 blockdev_rbd -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:19.725 17:38:38 blockdev_rbd -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:19.725 17:38:38 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:34:19.725 17:38:38 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:19.725 17:38:38 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:34:19.725 ************************************ 00:34:19.725 START TEST bdev_verify 00:34:19.725 ************************************ 00:34:19.725 17:38:38 blockdev_rbd.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:19.984 [2024-07-22 17:38:38.800221] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:19.984 [2024-07-22 17:38:38.800494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127208 ] 00:34:20.243 [2024-07-22 17:38:38.973017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:20.508 [2024-07-22 17:38:39.287015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:20.508 [2024-07-22 17:38:39.287037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:23.849 [2024-07-22 17:38:42.406823] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:34:23.849 Running I/O for 5 seconds... 
00:34:29.115 00:34:29.115 Latency(us) 00:34:29.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:29.115 Job: Ceph0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:29.115 Verification LBA range: start 0x0 length 0x1f400 00:34:29.115 Ceph0 : 5.02 2169.44 8.47 0.00 0.00 58853.14 2383.13 770226.73 00:34:29.115 Job: Ceph0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:29.115 Verification LBA range: start 0x1f400 length 0x1f400 00:34:29.115 Ceph0 : 5.03 2252.52 8.80 0.00 0.00 56535.11 4676.89 770226.73 00:34:29.115 =================================================================================================================== 00:34:29.115 Total : 4421.96 17.27 0.00 0.00 57671.58 2383.13 770226.73 00:34:30.067 00:34:30.067 real 0m10.202s 00:34:30.067 user 0m18.009s 00:34:30.067 sys 0m2.057s 00:34:30.067 17:38:48 blockdev_rbd.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:30.067 17:38:48 blockdev_rbd.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:34:30.067 ************************************ 00:34:30.067 END TEST bdev_verify 00:34:30.067 ************************************ 00:34:30.067 17:38:48 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:34:30.067 17:38:48 blockdev_rbd -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:30.067 17:38:48 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:34:30.067 17:38:48 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:30.067 17:38:48 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:34:30.067 ************************************ 00:34:30.067 START TEST bdev_verify_big_io 00:34:30.067 ************************************ 00:34:30.067 17:38:48 blockdev_rbd.bdev_verify_big_io -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:30.326 [2024-07-22 17:38:49.051143] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:30.326 [2024-07-22 17:38:49.051349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127343 ] 00:34:30.326 [2024-07-22 17:38:49.216614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:30.584 [2024-07-22 17:38:49.471878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:30.584 [2024-07-22 17:38:49.471899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:31.152 [2024-07-22 17:38:49.924978] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:34:31.152 Running I/O for 5 seconds... 
00:34:36.413 00:34:36.413 Latency(us) 00:34:36.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:36.413 Job: Ceph0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:34:36.413 Verification LBA range: start 0x0 length 0x1f40 00:34:36.413 Ceph0 : 5.09 638.92 39.93 0.00 0.00 195824.90 5689.72 314572.80 00:34:36.413 Job: Ceph0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:34:36.413 Verification LBA range: start 0x1f40 length 0x1f40 00:34:36.413 Ceph0 : 5.12 641.29 40.08 0.00 0.00 194730.82 5957.82 335544.32 00:34:36.413 =================================================================================================================== 00:34:36.413 Total : 1280.21 80.01 0.00 0.00 195275.09 5689.72 335544.32 00:34:37.788 00:34:37.788 real 0m7.492s 00:34:37.788 user 0m14.429s 00:34:37.788 sys 0m1.442s 00:34:37.788 17:38:56 blockdev_rbd.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:37.788 17:38:56 blockdev_rbd.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:34:37.788 ************************************ 00:34:37.788 END TEST bdev_verify_big_io 00:34:37.788 ************************************ 00:34:37.788 17:38:56 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:34:37.788 17:38:56 blockdev_rbd -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:37.788 17:38:56 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:34:37.788 17:38:56 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:37.788 17:38:56 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:34:37.788 ************************************ 00:34:37.788 START TEST bdev_write_zeroes 00:34:37.788 ************************************ 00:34:37.788 17:38:56 blockdev_rbd.bdev_write_zeroes -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:37.788 [2024-07-22 17:38:56.633768] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:37.788 [2024-07-22 17:38:56.634044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127450 ] 00:34:38.063 [2024-07-22 17:38:56.817116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:38.346 [2024-07-22 17:38:57.134142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:38.912 [2024-07-22 17:38:57.610326] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:34:38.912 Running I/O for 1 seconds... 00:34:40.285 00:34:40.285 Latency(us) 00:34:40.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:40.285 Job: Ceph0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:40.285 Ceph0 : 1.43 3699.34 14.45 0.00 0.00 34462.04 6583.39 754974.72 00:34:40.285 =================================================================================================================== 00:34:40.285 Total : 3699.34 14.45 0.00 0.00 34462.04 6583.39 754974.72 00:34:41.741 00:34:41.741 real 0m3.865s 00:34:41.741 user 0m3.760s 00:34:41.741 sys 0m0.706s 00:34:41.741 17:39:00 blockdev_rbd.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:41.741 17:39:00 blockdev_rbd.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:34:41.741 ************************************ 00:34:41.741 END TEST bdev_write_zeroes 00:34:41.741 ************************************ 00:34:41.741 17:39:00 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 
00:34:41.741 17:39:00 blockdev_rbd -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:41.741 17:39:00 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:34:41.741 17:39:00 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:41.741 17:39:00 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:34:41.741 ************************************ 00:34:41.741 START TEST bdev_json_nonenclosed 00:34:41.741 ************************************ 00:34:41.741 17:39:00 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:41.741 [2024-07-22 17:39:00.535603] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:41.741 [2024-07-22 17:39:00.535817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127528 ] 00:34:41.999 [2024-07-22 17:39:00.712827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:42.256 [2024-07-22 17:39:01.002466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:42.256 [2024-07-22 17:39:01.002587] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:34:42.256 [2024-07-22 17:39:01.002631] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:42.256 [2024-07-22 17:39:01.002649] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:42.514 00:34:42.514 real 0m1.062s 00:34:42.514 user 0m0.796s 00:34:42.514 sys 0m0.159s 00:34:42.514 17:39:01 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:34:42.514 17:39:01 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:42.514 17:39:01 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:34:42.514 ************************************ 00:34:42.514 END TEST bdev_json_nonenclosed 00:34:42.514 ************************************ 00:34:42.772 17:39:01 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 234 00:34:42.772 17:39:01 blockdev_rbd -- bdev/blockdev.sh@781 -- # true 00:34:42.772 17:39:01 blockdev_rbd -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:42.772 17:39:01 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:34:42.772 17:39:01 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:42.772 17:39:01 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:34:42.772 ************************************ 00:34:42.772 START TEST bdev_json_nonarray 00:34:42.772 ************************************ 00:34:42.772 17:39:01 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:42.772 [2024-07-22 17:39:01.651263] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:34:42.772 [2024-07-22 17:39:01.651471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127559 ] 00:34:43.030 [2024-07-22 17:39:01.825247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.287 [2024-07-22 17:39:02.085487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:43.287 [2024-07-22 17:39:02.085655] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:34:43.287 [2024-07-22 17:39:02.085706] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:43.287 [2024-07-22 17:39:02.085737] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:43.853 00:34:43.853 real 0m1.085s 00:34:43.853 user 0m0.812s 00:34:43.853 sys 0m0.165s 00:34:43.853 17:39:02 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:34:43.853 17:39:02 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:43.853 ************************************ 00:34:43.853 END TEST bdev_json_nonarray 00:34:43.853 ************************************ 00:34:43.853 17:39:02 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:34:43.853 17:39:02 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 234 00:34:43.853 17:39:02 blockdev_rbd -- bdev/blockdev.sh@784 -- # true 00:34:43.853 17:39:02 blockdev_rbd -- bdev/blockdev.sh@786 -- # [[ rbd == bdev ]] 00:34:43.853 17:39:02 blockdev_rbd -- bdev/blockdev.sh@793 -- # [[ rbd == gpt ]] 00:34:43.853 17:39:02 blockdev_rbd -- bdev/blockdev.sh@797 -- # [[ rbd == crypto_sw ]] 00:34:43.853 17:39:02 blockdev_rbd -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:34:43.853 17:39:02 blockdev_rbd -- 
bdev/blockdev.sh@810 -- # cleanup 00:34:43.853 17:39:02 blockdev_rbd -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:34:43.853 17:39:02 blockdev_rbd -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:43.853 17:39:02 blockdev_rbd -- bdev/blockdev.sh@26 -- # [[ rbd == rbd ]] 00:34:43.853 17:39:02 blockdev_rbd -- bdev/blockdev.sh@27 -- # rbd_cleanup 00:34:43.853 17:39:02 blockdev_rbd -- common/autotest_common.sh@1031 -- # hash ceph 00:34:43.853 17:39:02 blockdev_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:34:43.853 + base_dir=/var/tmp/ceph 00:34:43.853 + image=/var/tmp/ceph/ceph_raw.img 00:34:43.853 + dev=/dev/loop200 00:34:43.853 + pkill -9 ceph 00:34:43.853 + sleep 3 00:34:47.196 + umount /dev/loop200p2 00:34:47.196 + losetup -d /dev/loop200 00:34:47.196 + rm -rf /var/tmp/ceph 00:34:47.196 17:39:05 blockdev_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:34:47.196 17:39:06 blockdev_rbd -- bdev/blockdev.sh@30 -- # [[ rbd == daos ]] 00:34:47.196 17:39:06 blockdev_rbd -- bdev/blockdev.sh@34 -- # [[ rbd = \g\p\t ]] 00:34:47.196 17:39:06 blockdev_rbd -- bdev/blockdev.sh@40 -- # [[ rbd == xnvme ]] 00:34:47.196 00:34:47.196 real 1m29.580s 00:34:47.196 user 1m47.137s 00:34:47.196 sys 0m12.354s 00:34:47.196 17:39:06 blockdev_rbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:47.196 17:39:06 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:34:47.196 ************************************ 00:34:47.196 END TEST blockdev_rbd 00:34:47.196 ************************************ 00:34:47.196 17:39:06 -- common/autotest_common.sh@1142 -- # return 0 00:34:47.196 17:39:06 -- spdk/autotest.sh@332 -- # run_test spdkcli_rbd /home/vagrant/spdk_repo/spdk/test/spdkcli/rbd.sh 00:34:47.196 17:39:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:47.196 17:39:06 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:34:47.196 17:39:06 -- common/autotest_common.sh@10 -- # set +x 00:34:47.196 ************************************ 00:34:47.196 START TEST spdkcli_rbd 00:34:47.196 ************************************ 00:34:47.196 17:39:06 spdkcli_rbd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/rbd.sh 00:34:47.454 * Looking for test storage... 00:34:47.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:34:47.454 17:39:06 spdkcli_rbd -- spdkcli/rbd.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:34:47.454 17:39:06 spdkcli_rbd -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:34:47.454 17:39:06 spdkcli_rbd -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:34:47.454 17:39:06 spdkcli_rbd -- spdkcli/rbd.sh@11 -- # MATCH_FILE=spdkcli_rbd.test 00:34:47.454 17:39:06 spdkcli_rbd -- spdkcli/rbd.sh@12 -- # SPDKCLI_BRANCH=/bdevs/rbd 00:34:47.454 17:39:06 spdkcli_rbd -- spdkcli/rbd.sh@14 -- # trap 'rbd_cleanup; cleanup' EXIT 00:34:47.454 17:39:06 spdkcli_rbd -- spdkcli/rbd.sh@15 -- # timing_enter run_spdk_tgt 00:34:47.454 17:39:06 spdkcli_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:47.454 17:39:06 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:34:47.454 17:39:06 spdkcli_rbd -- spdkcli/rbd.sh@16 -- # run_spdk_tgt 00:34:47.454 17:39:06 spdkcli_rbd -- spdkcli/common.sh@27 -- # spdk_tgt_pid=127680 00:34:47.454 17:39:06 spdkcli_rbd -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:34:47.454 17:39:06 spdkcli_rbd -- spdkcli/common.sh@28 -- # waitforlisten 127680 00:34:47.454 17:39:06 spdkcli_rbd -- common/autotest_common.sh@829 -- # '[' -z 127680 ']' 00:34:47.454 17:39:06 spdkcli_rbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.454 17:39:06 spdkcli_rbd -- common/autotest_common.sh@834 
-- # local max_retries=100 00:34:47.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:47.454 17:39:06 spdkcli_rbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:47.454 17:39:06 spdkcli_rbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:47.454 17:39:06 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:34:47.454 [2024-07-22 17:39:06.328062] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:47.454 [2024-07-22 17:39:06.328275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127680 ] 00:34:47.712 [2024-07-22 17:39:06.493184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:47.971 [2024-07-22 17:39:06.753846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.971 [2024-07-22 17:39:06.753859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:48.905 17:39:07 spdkcli_rbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:48.905 17:39:07 spdkcli_rbd -- common/autotest_common.sh@862 -- # return 0 00:34:48.905 17:39:07 spdkcli_rbd -- spdkcli/rbd.sh@17 -- # timing_exit run_spdk_tgt 00:34:48.905 17:39:07 spdkcli_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:48.905 17:39:07 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:34:48.905 17:39:07 spdkcli_rbd -- spdkcli/rbd.sh@19 -- # timing_enter spdkcli_create_rbd_config 00:34:48.905 17:39:07 spdkcli_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:48.906 17:39:07 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:34:48.906 17:39:07 spdkcli_rbd -- spdkcli/rbd.sh@20 -- # rbd_cleanup 00:34:48.906 17:39:07 spdkcli_rbd -- 
common/autotest_common.sh@1031 -- # hash ceph 00:34:48.906 17:39:07 spdkcli_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:34:48.906 + base_dir=/var/tmp/ceph 00:34:48.906 + image=/var/tmp/ceph/ceph_raw.img 00:34:48.906 + dev=/dev/loop200 00:34:48.906 + pkill -9 ceph 00:34:48.906 + sleep 3 00:34:52.213 + umount /dev/loop200p2 00:34:52.213 umount: /dev/loop200p2: no mount point specified. 00:34:52.213 + losetup -d /dev/loop200 00:34:52.213 losetup: /dev/loop200: detach failed: No such device or address 00:34:52.213 + rm -rf /var/tmp/ceph 00:34:52.213 17:39:10 spdkcli_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:34:52.213 17:39:10 spdkcli_rbd -- spdkcli/rbd.sh@21 -- # rbd_setup 127.0.0.1 00:34:52.214 17:39:10 spdkcli_rbd -- common/autotest_common.sh@1005 -- # '[' -z 127.0.0.1 ']' 00:34:52.214 17:39:10 spdkcli_rbd -- common/autotest_common.sh@1009 -- # '[' -n '' ']' 00:34:52.214 17:39:10 spdkcli_rbd -- common/autotest_common.sh@1018 -- # hash ceph 00:34:52.214 17:39:10 spdkcli_rbd -- common/autotest_common.sh@1019 -- # export PG_NUM=128 00:34:52.214 17:39:10 spdkcli_rbd -- common/autotest_common.sh@1019 -- # PG_NUM=128 00:34:52.214 17:39:10 spdkcli_rbd -- common/autotest_common.sh@1020 -- # export RBD_POOL=rbd 00:34:52.214 17:39:10 spdkcli_rbd -- common/autotest_common.sh@1020 -- # RBD_POOL=rbd 00:34:52.214 17:39:10 spdkcli_rbd -- common/autotest_common.sh@1021 -- # export RBD_NAME=foo 00:34:52.214 17:39:10 spdkcli_rbd -- common/autotest_common.sh@1021 -- # RBD_NAME=foo 00:34:52.214 17:39:10 spdkcli_rbd -- common/autotest_common.sh@1022 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:34:52.214 + base_dir=/var/tmp/ceph 00:34:52.214 + image=/var/tmp/ceph/ceph_raw.img 00:34:52.214 + dev=/dev/loop200 00:34:52.214 + pkill -9 ceph 00:34:52.214 + sleep 3 00:34:54.744 + umount /dev/loop200p2 00:34:54.744 umount: /dev/loop200p2: no mount point specified. 
00:34:54.744 + losetup -d /dev/loop200 00:34:54.744 losetup: /dev/loop200: detach failed: No such device or address 00:34:54.744 + rm -rf /var/tmp/ceph 00:34:54.744 17:39:13 spdkcli_rbd -- common/autotest_common.sh@1023 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 127.0.0.1 00:34:55.002 + set -e 00:34:55.002 +++ dirname /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 00:34:55.002 ++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/ceph 00:34:55.002 + script_dir=/home/vagrant/spdk_repo/spdk/scripts/ceph 00:34:55.002 + base_dir=/var/tmp/ceph 00:34:55.002 + mon_ip=127.0.0.1 00:34:55.002 + mon_dir=/var/tmp/ceph/mon.a 00:34:55.002 + pid_dir=/var/tmp/ceph/pid 00:34:55.002 + ceph_conf=/var/tmp/ceph/ceph.conf 00:34:55.002 + mnt_dir=/var/tmp/ceph/mnt 00:34:55.002 + image=/var/tmp/ceph_raw.img 00:34:55.002 + dev=/dev/loop200 00:34:55.002 + modprobe loop 00:34:55.002 + umount /dev/loop200p2 00:34:55.002 umount: /dev/loop200p2: no mount point specified. 00:34:55.002 + true 00:34:55.002 + losetup -d /dev/loop200 00:34:55.002 losetup: /dev/loop200: detach failed: No such device or address 00:34:55.002 + true 00:34:55.002 + '[' -d /var/tmp/ceph ']' 00:34:55.002 + mkdir /var/tmp/ceph 00:34:55.002 + cp /home/vagrant/spdk_repo/spdk/scripts/ceph/ceph.conf /var/tmp/ceph/ceph.conf 00:34:55.002 + '[' '!' 
-e /var/tmp/ceph_raw.img ']' 00:34:55.002 + fallocate -l 4G /var/tmp/ceph_raw.img 00:34:55.002 + mknod /dev/loop200 b 7 200 00:34:55.002 mknod: /dev/loop200: File exists 00:34:55.002 + true 00:34:55.002 + losetup /dev/loop200 /var/tmp/ceph_raw.img 00:34:55.002 + PARTED='parted -s' 00:34:55.002 + SGDISK=sgdisk 00:34:55.002 Partitioning /dev/loop200 00:34:55.002 + echo 'Partitioning /dev/loop200' 00:34:55.002 + parted -s /dev/loop200 mktable gpt 00:34:55.002 + sleep 2 00:34:57.531 + parted -s /dev/loop200 mkpart primary 0% 2GiB 00:34:57.531 + parted -s /dev/loop200 mkpart primary 2GiB 100% 00:34:57.531 + partno=0 00:34:57.531 + echo 'Setting name on /dev/loop200' 00:34:57.531 Setting name on /dev/loop200 00:34:57.531 + sgdisk -c 1:osd-device-0-journal /dev/loop200 00:34:58.098 Warning: The kernel is still using the old partition table. 00:34:58.098 The new table will be used at the next reboot or after you 00:34:58.098 run partprobe(8) or kpartx(8) 00:34:58.098 The operation has completed successfully. 00:34:58.098 + sgdisk -c 2:osd-device-0-data /dev/loop200 00:34:59.033 Warning: The kernel is still using the old partition table. 00:34:59.033 The new table will be used at the next reboot or after you 00:34:59.033 run partprobe(8) or kpartx(8) 00:34:59.033 The operation has completed successfully. 
00:34:59.033 + kpartx /dev/loop200 00:34:59.033 loop200p1 : 0 4192256 /dev/loop200 2048 00:34:59.033 loop200p2 : 0 4192256 /dev/loop200 4194304 00:34:59.291 ++ ceph -v 00:34:59.291 ++ awk '{print $3}' 00:34:59.291 + ceph_version=17.2.7 00:34:59.291 + ceph_maj=17 00:34:59.291 + '[' 17 -gt 12 ']' 00:34:59.291 + update_config=true 00:34:59.291 + rm -f /var/log/ceph/ceph-mon.a.log 00:34:59.291 + set_min_mon_release='--set-min-mon-release 14' 00:34:59.291 + ceph_osd_extra_config='--check-needs-journal --no-mon-config' 00:34:59.291 + mnt_pt=/var/tmp/ceph/mnt/osd-device-0-data 00:34:59.291 + mkdir -p /var/tmp/ceph/mnt/osd-device-0-data 00:34:59.291 + mkfs.xfs -f /dev/disk/by-partlabel/osd-device-0-data 00:34:59.291 meta-data=/dev/disk/by-partlabel/osd-device-0-data isize=512 agcount=4, agsize=131008 blks 00:34:59.291 = sectsz=512 attr=2, projid32bit=1 00:34:59.291 = crc=1 finobt=1, sparse=1, rmapbt=0 00:34:59.291 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:34:59.291 data = bsize=4096 blocks=524032, imaxpct=25 00:34:59.291 = sunit=0 swidth=0 blks 00:34:59.291 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:34:59.291 log =internal log bsize=4096 blocks=16384, version=2 00:34:59.291 = sectsz=512 sunit=0 blks, lazy-count=1 00:34:59.291 realtime =none extsz=4096 blocks=0, rtextents=0 00:34:59.291 Discarding blocks...Done. 00:34:59.291 + mount /dev/disk/by-partlabel/osd-device-0-data /var/tmp/ceph/mnt/osd-device-0-data 00:34:59.291 + cat 00:34:59.291 + rm -rf '/var/tmp/ceph/mon.a/*' 00:34:59.291 + mkdir -p /var/tmp/ceph/mon.a 00:34:59.291 + mkdir -p /var/tmp/ceph/pid 00:34:59.291 + rm -f /etc/ceph/ceph.client.admin.keyring 00:34:59.291 + ceph-authtool --create-keyring --gen-key --name=mon. 
/var/tmp/ceph/keyring --cap mon 'allow *' 00:34:59.291 creating /var/tmp/ceph/keyring 00:34:59.291 + ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /var/tmp/ceph/keyring 00:34:59.291 + monmaptool --create --clobber --add a 127.0.0.1:12046 --print /var/tmp/ceph/monmap --set-min-mon-release 14 00:34:59.550 monmaptool: monmap file /var/tmp/ceph/monmap 00:34:59.550 monmaptool: generated fsid 6974e083-a64b-4027-b09f-bc286b7d2e8a 00:34:59.550 setting min_mon_release = octopus 00:34:59.550 epoch 0 00:34:59.550 fsid 6974e083-a64b-4027-b09f-bc286b7d2e8a 00:34:59.550 last_changed 2024-07-22T17:39:18.259116+0000 00:34:59.550 created 2024-07-22T17:39:18.259116+0000 00:34:59.550 min_mon_release 15 (octopus) 00:34:59.550 election_strategy: 1 00:34:59.550 0: v2:127.0.0.1:12046/0 mon.a 00:34:59.550 monmaptool: writing epoch 0 to /var/tmp/ceph/monmap (1 monitors) 00:34:59.550 + sh -c 'ulimit -c unlimited && exec ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring --mon-data=/var/tmp/ceph/mon.a' 00:34:59.550 + '[' true = true ']' 00:34:59.550 + sed -i 's/mon addr = /mon addr = v2:/g' /var/tmp/ceph/ceph.conf 00:34:59.550 + cp /var/tmp/ceph/keyring /var/tmp/ceph/mon.a/keyring 00:34:59.550 + cp /var/tmp/ceph/ceph.conf /etc/ceph/ceph.conf 00:34:59.550 + cp /var/tmp/ceph/keyring /etc/ceph/keyring 00:34:59.550 + cp /var/tmp/ceph/keyring /etc/ceph/ceph.client.admin.keyring 00:34:59.550 + chmod a+r /etc/ceph/ceph.client.admin.keyring 00:34:59.550 ++ hostname 00:34:59.550 + ceph-run sh -c 'ulimit -n 16384 && ulimit -c unlimited && exec ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring --pid-file=/var/tmp/ceph/pid/root@fedora38-cloud-1716830599-074-updated-1705279005.pid --mon-data=/var/tmp/ceph/mon.a' 00:34:59.550 + true 00:34:59.550 + '[' true = true ']' 00:34:59.550 + ceph-conf --name mon.a --show-config-value log_file 00:34:59.550 
/var/log/ceph/ceph-mon.a.log 00:34:59.550 ++ ceph -s 00:34:59.550 ++ grep id 00:34:59.550 ++ awk '{print $2}' 00:34:59.808 + fsid=6974e083-a64b-4027-b09f-bc286b7d2e8a 00:34:59.808 + sed -i 's/perf = true/perf = true\n\tfsid = 6974e083-a64b-4027-b09f-bc286b7d2e8a \n/g' /var/tmp/ceph/ceph.conf 00:34:59.808 + (( ceph_maj < 18 )) 00:34:59.808 + sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' /var/tmp/ceph/ceph.conf 00:34:59.808 + cat /var/tmp/ceph/ceph.conf 00:34:59.808 [global] 00:34:59.808 debug_lockdep = 0/0 00:34:59.808 debug_context = 0/0 00:34:59.808 debug_crush = 0/0 00:34:59.808 debug_buffer = 0/0 00:34:59.808 debug_timer = 0/0 00:34:59.808 debug_filer = 0/0 00:34:59.808 debug_objecter = 0/0 00:34:59.808 debug_rados = 0/0 00:34:59.808 debug_rbd = 0/0 00:34:59.808 debug_ms = 0/0 00:34:59.808 debug_monc = 0/0 00:34:59.808 debug_tp = 0/0 00:34:59.808 debug_auth = 0/0 00:34:59.808 debug_finisher = 0/0 00:34:59.808 debug_heartbeatmap = 0/0 00:34:59.808 debug_perfcounter = 0/0 00:34:59.808 debug_asok = 0/0 00:34:59.808 debug_throttle = 0/0 00:34:59.808 debug_mon = 0/0 00:34:59.808 debug_paxos = 0/0 00:34:59.808 debug_rgw = 0/0 00:34:59.808 00:34:59.808 perf = true 00:34:59.808 osd objectstore = filestore 00:34:59.808 00:34:59.808 fsid = 6974e083-a64b-4027-b09f-bc286b7d2e8a 00:34:59.808 00:34:59.808 mutex_perf_counter = false 00:34:59.808 throttler_perf_counter = false 00:34:59.808 rbd cache = false 00:34:59.808 mon_allow_pool_delete = true 00:34:59.808 00:34:59.808 osd_pool_default_size = 1 00:34:59.808 00:34:59.808 [mon] 00:34:59.808 mon_max_pool_pg_num=166496 00:34:59.808 mon_osd_max_split_count = 10000 00:34:59.808 mon_pg_warn_max_per_osd = 10000 00:34:59.808 00:34:59.808 [osd] 00:34:59.808 osd_op_threads = 64 00:34:59.808 filestore_queue_max_ops=5000 00:34:59.808 filestore_queue_committing_max_ops=5000 00:34:59.808 journal_max_write_entries=1000 00:34:59.808 journal_queue_max_ops=3000 00:34:59.808 objecter_inflight_ops=102400 00:34:59.808 
filestore_wbthrottle_enable=false 00:34:59.808 filestore_queue_max_bytes=1048576000 00:34:59.808 filestore_queue_committing_max_bytes=1048576000 00:34:59.808 journal_max_write_bytes=1048576000 00:34:59.808 journal_queue_max_bytes=1048576000 00:34:59.808 ms_dispatch_throttle_bytes=1048576000 00:34:59.808 objecter_inflight_op_bytes=1048576000 00:34:59.808 filestore_max_sync_interval=10 00:34:59.808 osd_client_message_size_cap = 0 00:34:59.808 osd_client_message_cap = 0 00:34:59.808 osd_enable_op_tracker = false 00:34:59.808 filestore_fd_cache_size = 10240 00:34:59.808 filestore_fd_cache_shards = 64 00:34:59.808 filestore_op_threads = 16 00:34:59.808 osd_op_num_shards = 48 00:34:59.808 osd_op_num_threads_per_shard = 2 00:34:59.808 osd_pg_object_context_cache_count = 10240 00:34:59.808 filestore_odsync_write = True 00:34:59.808 journal_dynamic_throttle = True 00:34:59.808 00:34:59.808 [osd.0] 00:34:59.808 osd data = /var/tmp/ceph/mnt/osd-device-0-data 00:34:59.808 osd journal = /dev/disk/by-partlabel/osd-device-0-journal 00:34:59.808 00:34:59.808 # add mon address 00:34:59.808 [mon.a] 00:34:59.808 mon addr = v2:127.0.0.1:12046 00:34:59.808 + i=0 00:34:59.808 + mkdir -p /var/tmp/ceph/mnt 00:34:59.808 ++ uuidgen 00:34:59.808 + uuid=af4d2ea2-8ef9-4494-8dd3-3790e6e41025 00:34:59.808 + ceph -c /var/tmp/ceph/ceph.conf osd create af4d2ea2-8ef9-4494-8dd3-3790e6e41025 0 00:35:00.375 0 00:35:00.375 + ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --mkfs --mkkey --osd-uuid af4d2ea2-8ef9-4494-8dd3-3790e6e41025 --check-needs-journal --no-mon-config 00:35:00.375 2024-07-22T17:39:19.112+0000 7f36c5066400 -1 auth: error reading file: /var/tmp/ceph/mnt/osd-device-0-data/keyring: can't open /var/tmp/ceph/mnt/osd-device-0-data/keyring: (2) No such file or directory 00:35:00.375 2024-07-22T17:39:19.113+0000 7f36c5066400 -1 created new key in keyring /var/tmp/ceph/mnt/osd-device-0-data/keyring 00:35:00.375 2024-07-22T17:39:19.156+0000 7f36c5066400 -1 journal check: ondisk fsid 
00000000-0000-0000-0000-000000000000 doesn't match expected af4d2ea2-8ef9-4494-8dd3-3790e6e41025, invalid (someone else's?) journal 00:35:00.375 2024-07-22T17:39:19.186+0000 7f36c5066400 -1 journal do_read_entry(4096): bad header magic 00:35:00.375 2024-07-22T17:39:19.186+0000 7f36c5066400 -1 journal do_read_entry(4096): bad header magic 00:35:00.375 ++ hostname 00:35:00.375 + ceph -c /var/tmp/ceph/ceph.conf osd crush add osd.0 1.0 host=fedora38-cloud-1716830599-074-updated-1705279005 root=default 00:35:01.751 add item id 0 name 'osd.0' weight 1 at location {host=fedora38-cloud-1716830599-074-updated-1705279005,root=default} to crush map 00:35:01.751 + ceph -c /var/tmp/ceph/ceph.conf -i /var/tmp/ceph/mnt/osd-device-0-data/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow *' 00:35:02.010 added key for osd.0 00:35:02.010 ++ ceph -c /var/tmp/ceph/ceph.conf config get osd osd_class_dir 00:35:02.267 + class_dir=/lib64/rados-classes 00:35:02.267 + [[ -e /lib64/rados-classes ]] 00:35:02.267 + ceph -c /var/tmp/ceph/ceph.conf config set osd osd_class_dir /lib64/rados-classes 00:35:02.833 + pkill -9 ceph-osd 00:35:02.833 + true 00:35:02.833 + sleep 2 00:35:04.734 + mkdir -p /var/tmp/ceph/pid 00:35:04.734 + env -i TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --pid-file=/var/tmp/ceph/pid/ceph-osd.0.pid 00:35:04.734 2024-07-22T17:39:23.604+0000 7f58f5e54400 -1 Falling back to public interface 00:35:04.734 2024-07-22T17:39:23.648+0000 7f58f5e54400 -1 journal do_read_entry(8192): bad header magic 00:35:04.734 2024-07-22T17:39:23.648+0000 7f58f5e54400 -1 journal do_read_entry(8192): bad header magic 00:35:04.734 2024-07-22T17:39:23.658+0000 7f58f5e54400 -1 osd.0 0 log_to_monitors true 00:35:05.665 17:39:24 spdkcli_rbd -- common/autotest_common.sh@1025 -- # ceph osd pool create rbd 128 00:35:07.040 pool 'rbd' created 00:35:07.040 17:39:25 spdkcli_rbd -- common/autotest_common.sh@1026 -- # rbd create foo --size 1000 
00:35:10.324 17:39:28 spdkcli_rbd -- spdkcli/rbd.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py '"/bdevs/rbd create rbd foo 512'\'' '\''Ceph0'\'' True "/bdevs/rbd' create rbd foo 512 Ceph1 'True 00:35:10.324 timing_exit spdkcli_create_rbd_config 00:35:10.324 00:35:10.324 timing_enter spdkcli_check_match 00:35:10.324 check_match 00:35:10.324 timing_exit spdkcli_check_match 00:35:10.324 00:35:10.324 timing_enter spdkcli_clear_rbd_config 00:35:10.324 /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py "/bdevs/rbd' delete Ceph0 Ceph0 '"/bdevs/rbd delete_all'\'' '\''Ceph1'\'' ' 00:35:10.582 Executing command: [' ', True] 00:35:10.582 17:39:29 spdkcli_rbd -- spdkcli/rbd.sh@31 -- # rbd_cleanup 00:35:10.582 17:39:29 spdkcli_rbd -- common/autotest_common.sh@1031 -- # hash ceph 00:35:10.582 17:39:29 spdkcli_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:35:10.582 + base_dir=/var/tmp/ceph 00:35:10.582 + image=/var/tmp/ceph/ceph_raw.img 00:35:10.582 + dev=/dev/loop200 00:35:10.582 + pkill -9 ceph 00:35:10.582 + sleep 3 00:35:13.886 + umount /dev/loop200p2 00:35:13.886 + losetup -d /dev/loop200 00:35:13.886 + rm -rf /var/tmp/ceph 00:35:13.886 17:39:32 spdkcli_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:35:13.886 17:39:32 spdkcli_rbd -- spdkcli/rbd.sh@32 -- # timing_exit spdkcli_clear_rbd_config 00:35:13.886 17:39:32 spdkcli_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:13.886 17:39:32 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:35:13.886 17:39:32 spdkcli_rbd -- spdkcli/rbd.sh@34 -- # killprocess 127680 00:35:13.886 17:39:32 spdkcli_rbd -- common/autotest_common.sh@948 -- # '[' -z 127680 ']' 00:35:13.886 17:39:32 spdkcli_rbd -- common/autotest_common.sh@952 -- # kill -0 127680 00:35:13.886 17:39:32 spdkcli_rbd -- common/autotest_common.sh@953 -- # uname 00:35:13.886 17:39:32 spdkcli_rbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux 
']' 00:35:13.886 17:39:32 spdkcli_rbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 127680 00:35:13.886 17:39:32 spdkcli_rbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:13.886 killing process with pid 127680 00:35:13.886 17:39:32 spdkcli_rbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:13.886 17:39:32 spdkcli_rbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 127680' 00:35:13.886 17:39:32 spdkcli_rbd -- common/autotest_common.sh@967 -- # kill 127680 00:35:13.886 17:39:32 spdkcli_rbd -- common/autotest_common.sh@972 -- # wait 127680 00:35:16.417 17:39:34 spdkcli_rbd -- spdkcli/rbd.sh@1 -- # rbd_cleanup 00:35:16.417 17:39:34 spdkcli_rbd -- common/autotest_common.sh@1031 -- # hash ceph 00:35:16.417 17:39:34 spdkcli_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:35:16.417 + base_dir=/var/tmp/ceph 00:35:16.417 + image=/var/tmp/ceph/ceph_raw.img 00:35:16.417 + dev=/dev/loop200 00:35:16.417 + pkill -9 ceph 00:35:16.417 + sleep 3 00:35:18.974 + umount /dev/loop200p2 00:35:18.974 umount: /dev/loop200p2: no mount point specified. 
00:35:18.974 + losetup -d /dev/loop200 00:35:19.233 losetup: /dev/loop200: detach failed: No such device or address 00:35:19.233 + rm -rf /var/tmp/ceph 00:35:19.233 17:39:37 spdkcli_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:35:19.233 17:39:37 spdkcli_rbd -- spdkcli/rbd.sh@1 -- # cleanup 00:35:19.233 17:39:37 spdkcli_rbd -- spdkcli/common.sh@10 -- # '[' -n 127680 ']' 00:35:19.233 17:39:37 spdkcli_rbd -- spdkcli/common.sh@11 -- # killprocess 127680 00:35:19.233 17:39:37 spdkcli_rbd -- common/autotest_common.sh@948 -- # '[' -z 127680 ']' 00:35:19.233 17:39:37 spdkcli_rbd -- common/autotest_common.sh@952 -- # kill -0 127680 00:35:19.233 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (127680) - No such process 00:35:19.233 Process with pid 127680 is not found 00:35:19.233 17:39:37 spdkcli_rbd -- common/autotest_common.sh@975 -- # echo 'Process with pid 127680 is not found' 00:35:19.233 17:39:37 spdkcli_rbd -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:35:19.233 17:39:37 spdkcli_rbd -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:19.233 17:39:37 spdkcli_rbd -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:19.233 17:39:37 spdkcli_rbd -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_rbd.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:19.233 00:35:19.233 real 0m31.829s 00:35:19.233 user 0m58.576s 00:35:19.233 sys 0m1.569s 00:35:19.233 17:39:37 spdkcli_rbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:19.233 17:39:37 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:35:19.233 ************************************ 00:35:19.233 END TEST spdkcli_rbd 00:35:19.233 ************************************ 00:35:19.233 17:39:37 -- common/autotest_common.sh@1142 -- # return 0 00:35:19.233 17:39:37 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:35:19.233 17:39:37 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 
00:35:19.233 17:39:37 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:35:19.233 17:39:37 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:35:19.233 17:39:37 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:35:19.233 17:39:37 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:35:19.233 17:39:37 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:35:19.233 17:39:37 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:35:19.233 17:39:37 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:35:19.233 17:39:37 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:35:19.233 17:39:37 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:35:19.233 17:39:37 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:35:19.233 17:39:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:19.233 17:39:37 -- common/autotest_common.sh@10 -- # set +x 00:35:19.233 17:39:37 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:35:19.233 17:39:37 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:35:19.233 17:39:37 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:35:19.233 17:39:37 -- common/autotest_common.sh@10 -- # set +x 00:35:20.607 INFO: APP EXITING 00:35:20.607 INFO: killing all VMs 00:35:20.607 INFO: killing vhost app 00:35:20.607 INFO: EXIT DONE 00:35:21.173 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:21.173 Waiting for block devices as requested 00:35:21.173 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:21.173 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:21.739 0000:00:10.0 (1b36 0010): Active devices: data@nvme1n1, so not binding PCI dev 00:35:21.997 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:21.997 Cleaning 00:35:21.997 Removing: /var/run/dpdk/spdk0/config 00:35:21.997 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:21.997 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:21.997 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:21.997 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:21.997 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:21.997 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:21.997 Removing: /var/run/dpdk/spdk1/config 00:35:21.997 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:21.997 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:21.997 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:21.997 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:21.997 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:21.997 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:21.997 Removing: /dev/shm/iscsi_trace.pid78406 00:35:21.997 Removing: /dev/shm/spdk_tgt_trace.pid58964 00:35:21.997 Removing: /var/run/dpdk/spdk0 00:35:21.997 Removing: /var/run/dpdk/spdk1 00:35:21.997 Removing: /var/run/dpdk/spdk_pid123664 00:35:21.997 Removing: /var/run/dpdk/spdk_pid123981 00:35:21.997 Removing: /var/run/dpdk/spdk_pid124045 00:35:21.997 Removing: /var/run/dpdk/spdk_pid124137 00:35:21.997 Removing: /var/run/dpdk/spdk_pid124212 00:35:21.997 Removing: /var/run/dpdk/spdk_pid124290 00:35:21.997 Removing: /var/run/dpdk/spdk_pid124487 00:35:21.997 Removing: /var/run/dpdk/spdk_pid124531 00:35:21.997 Removing: /var/run/dpdk/spdk_pid124571 00:35:21.997 Removing: /var/run/dpdk/spdk_pid124604 00:35:21.997 Removing: /var/run/dpdk/spdk_pid124638 00:35:21.997 Removing: /var/run/dpdk/spdk_pid124741 00:35:21.997 Removing: /var/run/dpdk/spdk_pid124790 00:35:21.997 Removing: /var/run/dpdk/spdk_pid125026 00:35:21.997 Removing: /var/run/dpdk/spdk_pid125348 00:35:21.997 Removing: /var/run/dpdk/spdk_pid125605 00:35:21.997 Removing: /var/run/dpdk/spdk_pid126488 00:35:21.997 Removing: /var/run/dpdk/spdk_pid126545 00:35:21.997 Removing: /var/run/dpdk/spdk_pid126844 00:35:21.997 Removing: /var/run/dpdk/spdk_pid127024 00:35:21.997 Removing: /var/run/dpdk/spdk_pid127208 00:35:21.997 Removing: 
/var/run/dpdk/spdk_pid127343 00:35:21.997 Removing: /var/run/dpdk/spdk_pid127450 00:35:21.997 Removing: /var/run/dpdk/spdk_pid127528 00:35:21.997 Removing: /var/run/dpdk/spdk_pid127559 00:35:21.997 Removing: /var/run/dpdk/spdk_pid127680 00:35:21.997 Removing: /var/run/dpdk/spdk_pid58742 00:35:21.997 Removing: /var/run/dpdk/spdk_pid58964 00:35:21.997 Removing: /var/run/dpdk/spdk_pid59196 00:35:21.997 Removing: /var/run/dpdk/spdk_pid59300 00:35:21.997 Removing: /var/run/dpdk/spdk_pid59356 00:35:21.997 Removing: /var/run/dpdk/spdk_pid59484 00:35:21.997 Removing: /var/run/dpdk/spdk_pid59512 00:35:21.997 Removing: /var/run/dpdk/spdk_pid59662 00:35:22.256 Removing: /var/run/dpdk/spdk_pid59862 00:35:22.256 Removing: /var/run/dpdk/spdk_pid60060 00:35:22.256 Removing: /var/run/dpdk/spdk_pid60163 00:35:22.256 Removing: /var/run/dpdk/spdk_pid60266 00:35:22.256 Removing: /var/run/dpdk/spdk_pid60386 00:35:22.256 Removing: /var/run/dpdk/spdk_pid60486 00:35:22.256 Removing: /var/run/dpdk/spdk_pid60526 00:35:22.256 Removing: /var/run/dpdk/spdk_pid60562 00:35:22.256 Removing: /var/run/dpdk/spdk_pid60630 00:35:22.256 Removing: /var/run/dpdk/spdk_pid60747 00:35:22.256 Removing: /var/run/dpdk/spdk_pid61205 00:35:22.256 Removing: /var/run/dpdk/spdk_pid61280 00:35:22.256 Removing: /var/run/dpdk/spdk_pid61356 00:35:22.256 Removing: /var/run/dpdk/spdk_pid61378 00:35:22.256 Removing: /var/run/dpdk/spdk_pid61532 00:35:22.256 Removing: /var/run/dpdk/spdk_pid61554 00:35:22.256 Removing: /var/run/dpdk/spdk_pid61703 00:35:22.256 Removing: /var/run/dpdk/spdk_pid61730 00:35:22.256 Removing: /var/run/dpdk/spdk_pid61794 00:35:22.256 Removing: /var/run/dpdk/spdk_pid61818 00:35:22.256 Removing: /var/run/dpdk/spdk_pid61887 00:35:22.256 Removing: /var/run/dpdk/spdk_pid61905 00:35:22.256 Removing: /var/run/dpdk/spdk_pid62092 00:35:22.256 Removing: /var/run/dpdk/spdk_pid62133 00:35:22.256 Removing: /var/run/dpdk/spdk_pid62210 00:35:22.256 Removing: /var/run/dpdk/spdk_pid62291 00:35:22.256 Removing: 
/var/run/dpdk/spdk_pid62333 00:35:22.256 Removing: /var/run/dpdk/spdk_pid62411 00:35:22.256 Removing: /var/run/dpdk/spdk_pid62452 00:35:22.256 Removing: /var/run/dpdk/spdk_pid62504 00:35:22.256 Removing: /var/run/dpdk/spdk_pid62551 00:35:22.256 Removing: /var/run/dpdk/spdk_pid62597 00:35:22.256 Removing: /var/run/dpdk/spdk_pid62644 00:35:22.256 Removing: /var/run/dpdk/spdk_pid62690 00:35:22.256 Removing: /var/run/dpdk/spdk_pid62737 00:35:22.256 Removing: /var/run/dpdk/spdk_pid62789 00:35:22.256 Removing: /var/run/dpdk/spdk_pid62830 00:35:22.256 Removing: /var/run/dpdk/spdk_pid62882 00:35:22.256 Removing: /var/run/dpdk/spdk_pid62929 00:35:22.256 Removing: /var/run/dpdk/spdk_pid62975 00:35:22.256 Removing: /var/run/dpdk/spdk_pid63026 00:35:22.256 Removing: /var/run/dpdk/spdk_pid63068 00:35:22.256 Removing: /var/run/dpdk/spdk_pid63119 00:35:22.256 Removing: /var/run/dpdk/spdk_pid63167 00:35:22.256 Removing: /var/run/dpdk/spdk_pid63211 00:35:22.256 Removing: /var/run/dpdk/spdk_pid63266 00:35:22.256 Removing: /var/run/dpdk/spdk_pid63317 00:35:22.256 Removing: /var/run/dpdk/spdk_pid63360 00:35:22.256 Removing: /var/run/dpdk/spdk_pid63442 00:35:22.256 Removing: /var/run/dpdk/spdk_pid63564 00:35:22.256 Removing: /var/run/dpdk/spdk_pid63908 00:35:22.256 Removing: /var/run/dpdk/spdk_pid63939 00:35:22.256 Removing: /var/run/dpdk/spdk_pid63970 00:35:22.256 Removing: /var/run/dpdk/spdk_pid64020 00:35:22.256 Removing: /var/run/dpdk/spdk_pid64025 00:35:22.256 Removing: /var/run/dpdk/spdk_pid64054 00:35:22.256 Removing: /var/run/dpdk/spdk_pid64082 00:35:22.256 Removing: /var/run/dpdk/spdk_pid64098 00:35:22.256 Removing: /var/run/dpdk/spdk_pid64149 00:35:22.256 Removing: /var/run/dpdk/spdk_pid64175 00:35:22.256 Removing: /var/run/dpdk/spdk_pid64233 00:35:22.256 Removing: /var/run/dpdk/spdk_pid64326 00:35:22.256 Removing: /var/run/dpdk/spdk_pid65115 00:35:22.256 Removing: /var/run/dpdk/spdk_pid66970 00:35:22.256 Removing: /var/run/dpdk/spdk_pid67269 00:35:22.256 Removing: 
/var/run/dpdk/spdk_pid67595 00:35:22.256 Removing: /var/run/dpdk/spdk_pid67856 00:35:22.256 Removing: /var/run/dpdk/spdk_pid68524 00:35:22.256 Removing: /var/run/dpdk/spdk_pid73249 00:35:22.256 Removing: /var/run/dpdk/spdk_pid77271 00:35:22.256 Removing: /var/run/dpdk/spdk_pid78036 00:35:22.256 Removing: /var/run/dpdk/spdk_pid78079 00:35:22.256 Removing: /var/run/dpdk/spdk_pid78406 00:35:22.256 Removing: /var/run/dpdk/spdk_pid79814 00:35:22.256 Removing: /var/run/dpdk/spdk_pid80219 00:35:22.256 Removing: /var/run/dpdk/spdk_pid80275 00:35:22.256 Removing: /var/run/dpdk/spdk_pid80676 00:35:22.256 Removing: /var/run/dpdk/spdk_pid83133 00:35:22.256 Clean 00:35:22.515 17:39:41 -- common/autotest_common.sh@1451 -- # return 0 00:35:22.515 17:39:41 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:35:22.515 17:39:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:22.515 17:39:41 -- common/autotest_common.sh@10 -- # set +x 00:35:22.515 17:39:41 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:35:22.515 17:39:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:22.515 17:39:41 -- common/autotest_common.sh@10 -- # set +x 00:35:22.515 17:39:41 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:22.515 17:39:41 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:35:22.515 17:39:41 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:35:22.515 17:39:41 -- spdk/autotest.sh@391 -- # hash lcov 00:35:22.515 17:39:41 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:35:22.515 17:39:41 -- spdk/autotest.sh@393 -- # hostname 00:35:22.515 17:39:41 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t 
fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:35:22.773 geninfo: WARNING: invalid characters removed from testname! 00:35:54.846 17:40:08 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:54.846 17:40:12 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:56.743 17:40:15 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:59.270 17:40:17 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:01.801 17:40:20 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info 
'*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:04.331 17:40:23 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:07.619 17:40:25 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:07.619 17:40:25 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:07.619 17:40:25 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:07.619 17:40:25 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:07.619 17:40:25 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:07.619 17:40:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.619 17:40:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.619 17:40:25 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.619 17:40:25 -- paths/export.sh@5 -- $ export PATH 00:36:07.619 17:40:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.619 17:40:25 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:36:07.619 17:40:25 -- common/autobuild_common.sh@447 -- $ date +%s 00:36:07.619 17:40:25 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721670025.XXXXXX 00:36:07.619 17:40:25 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721670025.aABJOb 00:36:07.619 17:40:25 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:36:07.619 17:40:25 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:36:07.619 17:40:25 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:36:07.619 17:40:25 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:36:07.619 17:40:25 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:36:07.619 17:40:25 -- common/autobuild_common.sh@463 -- $ 
get_config_params 00:36:07.620 17:40:25 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:36:07.620 17:40:25 -- common/autotest_common.sh@10 -- $ set +x 00:36:07.620 17:40:25 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-rbd --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:36:07.620 17:40:25 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:36:07.620 17:40:25 -- pm/common@17 -- $ local monitor 00:36:07.620 17:40:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:07.620 17:40:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:07.620 17:40:25 -- pm/common@25 -- $ sleep 1 00:36:07.620 17:40:25 -- pm/common@21 -- $ date +%s 00:36:07.620 17:40:25 -- pm/common@21 -- $ date +%s 00:36:07.620 17:40:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721670025 00:36:07.620 17:40:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721670025 00:36:07.620 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721670025_collect-vmstat.pm.log 00:36:07.620 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721670025_collect-cpu-load.pm.log 00:36:08.186 17:40:26 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:36:08.186 17:40:26 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:36:08.186 17:40:26 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:36:08.186 17:40:26 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:36:08.186 17:40:26 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:36:08.186 17:40:26 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:36:08.186 
17:40:26 -- spdk/autopackage.sh@19 -- $ timing_finish 00:36:08.186 17:40:26 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:08.186 17:40:26 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:36:08.186 17:40:26 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:08.186 17:40:27 -- spdk/autopackage.sh@20 -- $ exit 0 00:36:08.186 17:40:27 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:36:08.186 17:40:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:36:08.186 17:40:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:36:08.186 17:40:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:08.186 17:40:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:36:08.186 17:40:27 -- pm/common@44 -- $ pid=130163 00:36:08.186 17:40:27 -- pm/common@50 -- $ kill -TERM 130163 00:36:08.186 17:40:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:08.186 17:40:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:36:08.186 17:40:27 -- pm/common@44 -- $ pid=130165 00:36:08.186 17:40:27 -- pm/common@50 -- $ kill -TERM 130165 00:36:08.186 + [[ -n 5149 ]] 00:36:08.186 + sudo kill 5149 00:36:08.198 [Pipeline] } 00:36:08.218 [Pipeline] // timeout 00:36:08.224 [Pipeline] } 00:36:08.242 [Pipeline] // stage 00:36:08.248 [Pipeline] } 00:36:08.265 [Pipeline] // catchError 00:36:08.273 [Pipeline] stage 00:36:08.276 [Pipeline] { (Stop VM) 00:36:08.289 [Pipeline] sh 00:36:08.566 + vagrant halt 00:36:12.803 ==> default: Halting domain... 00:36:18.079 [Pipeline] sh 00:36:18.384 + vagrant destroy -f 00:36:21.665 ==> default: Removing domain... 
00:36:22.243 [Pipeline] sh 00:36:22.521 + mv output /var/jenkins/workspace/iscsi-vg-autotest/output 00:36:22.529 [Pipeline] } 00:36:22.548 [Pipeline] // stage 00:36:22.554 [Pipeline] } 00:36:22.568 [Pipeline] // dir 00:36:22.573 [Pipeline] } 00:36:22.589 [Pipeline] // wrap 00:36:22.594 [Pipeline] } 00:36:22.609 [Pipeline] // catchError 00:36:22.618 [Pipeline] stage 00:36:22.619 [Pipeline] { (Epilogue) 00:36:22.630 [Pipeline] sh 00:36:22.909 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:31.039 [Pipeline] catchError 00:36:31.042 [Pipeline] { 00:36:31.058 [Pipeline] sh 00:36:31.343 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:31.601 Artifacts sizes are good 00:36:31.609 [Pipeline] } 00:36:31.625 [Pipeline] // catchError 00:36:31.636 [Pipeline] archiveArtifacts 00:36:31.643 Archiving artifacts 00:36:32.523 [Pipeline] cleanWs 00:36:32.533 [WS-CLEANUP] Deleting project workspace... 00:36:32.533 [WS-CLEANUP] Deferred wipeout is used... 00:36:32.539 [WS-CLEANUP] done 00:36:32.541 [Pipeline] } 00:36:32.559 [Pipeline] // stage 00:36:32.565 [Pipeline] } 00:36:32.582 [Pipeline] // node 00:36:32.588 [Pipeline] End of Pipeline 00:36:32.611 Finished: SUCCESS