00:00:00.001 Started by upstream project "autotest-nightly" build number 3910
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3288
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.002 Started by timer
00:00:00.002 Started by timer
00:00:00.002 Started by timer
00:00:00.002 Started by timer
00:00:00.150 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/iscsi-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.151 The recommended git tool is: git
00:00:00.151 using credential 00000000-0000-0000-0000-000000000002
00:00:00.153 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/iscsi-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.211 Fetching changes from the remote Git repository
00:00:00.213 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.246 Using shallow fetch with depth 1
00:00:00.246 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.246 > git --version # timeout=10
00:00:00.277 > git --version # 'git version 2.39.2'
00:00:00.277 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.299 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.299 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.411 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.423 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.436 Checking out Revision 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 (FETCH_HEAD)
00:00:05.436 > git config core.sparsecheckout # timeout=10
00:00:05.448 > git read-tree -mu HEAD # timeout=10
00:00:05.466 > git checkout -f 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=5
00:00:05.485 Commit message: "doc: add chapter about running CI Vagrant images on dev-systems"
00:00:05.485 > git rev-list --no-walk 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=10
00:00:05.633 [Pipeline] Start of Pipeline
00:00:05.646 [Pipeline] library
00:00:05.647 Loading library shm_lib@master
00:00:05.647 Library shm_lib@master is cached. Copying from home.
00:00:05.660 [Pipeline] node
00:00:20.662 Still waiting to schedule task
00:00:20.662 Waiting for next available executor on ‘vagrant-vm-host’
00:11:09.352 Running on VM-host-SM16 in /var/jenkins/workspace/iscsi-vg-autotest
00:11:09.354 [Pipeline] {
00:11:09.362 [Pipeline] catchError
00:11:09.363 [Pipeline] {
00:11:09.377 [Pipeline] wrap
00:11:09.387 [Pipeline] {
00:11:09.393 [Pipeline] stage
00:11:09.395 [Pipeline] { (Prologue)
00:11:09.409 [Pipeline] echo
00:11:09.410 Node: VM-host-SM16
00:11:09.414 [Pipeline] cleanWs
00:11:09.426 [WS-CLEANUP] Deleting project workspace...
00:11:09.426 [WS-CLEANUP] Deferred wipeout is used...
00:11:09.432 [WS-CLEANUP] done
00:11:09.706 [Pipeline] setCustomBuildProperty
00:11:09.812 [Pipeline] httpRequest
00:11:09.831 [Pipeline] echo
00:11:09.832 Sorcerer 10.211.164.101 is alive
00:11:09.841 [Pipeline] httpRequest
00:11:09.844 HttpMethod: GET
00:11:09.844 URL: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:11:09.845 Sending request to url: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:11:09.846 Response Code: HTTP/1.1 200 OK
00:11:09.847 Success: Status code 200 is in the accepted range: 200,404
00:11:09.847 Saving response body to /var/jenkins/workspace/iscsi-vg-autotest/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:11:09.991 [Pipeline] sh
00:11:10.271 + tar --no-same-owner -xf jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:11:10.285 [Pipeline] httpRequest
00:11:10.302 [Pipeline] echo
00:11:10.303 Sorcerer 10.211.164.101 is alive
00:11:10.310 [Pipeline] httpRequest
00:11:10.313 HttpMethod: GET
00:11:10.314 URL: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:11:10.314 Sending request to url: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:11:10.316 Response Code: HTTP/1.1 200 OK
00:11:10.317 Success: Status code 200 is in the accepted range: 200,404
00:11:10.317 Saving response body to /var/jenkins/workspace/iscsi-vg-autotest/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:11:12.450 [Pipeline] sh
00:11:12.728 + tar --no-same-owner -xf spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:11:16.067 [Pipeline] sh
00:11:16.343 + git -C spdk log --oneline -n5
00:11:16.343 f7b31b2b9 log: declare g_deprecation_epoch static
00:11:16.343 21d0c3ad6 trace: declare g_user_thread_index_start, g_ut_array and g_ut_array_mutex static
00:11:16.343 3731556bd lvol: declare g_lvol_if static
00:11:16.343 f8404a2d4 nvme: declare g_current_transport_index and g_spdk_transports static
00:11:16.343 34efb6523 dma: declare g_dma_mutex and g_dma_memory_domains static
00:11:16.362 [Pipeline] writeFile
00:11:16.379 [Pipeline] sh
00:11:16.658 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:11:16.679 [Pipeline] sh
00:11:16.999 + cat autorun-spdk.conf
00:11:16.999 SPDK_RUN_FUNCTIONAL_TEST=1
00:11:16.999 SPDK_TEST_ISCSI_INITIATOR=1
00:11:16.999 SPDK_TEST_ISCSI=1
00:11:16.999 SPDK_TEST_RBD=1
00:11:16.999 SPDK_RUN_ASAN=1
00:11:16.999 SPDK_RUN_UBSAN=1
00:11:16.999 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:11:17.005 RUN_NIGHTLY=1
00:11:17.007 [Pipeline] }
00:11:17.023 [Pipeline] // stage
00:11:17.043 [Pipeline] stage
00:11:17.045 [Pipeline] { (Run VM)
00:11:17.061 [Pipeline] sh
00:11:17.339 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:11:17.339 + echo 'Start stage prepare_nvme.sh'
00:11:17.339 Start stage prepare_nvme.sh
00:11:17.339 + [[ -n 7 ]]
00:11:17.339 + disk_prefix=ex7
00:11:17.339 + [[ -n /var/jenkins/workspace/iscsi-vg-autotest ]]
00:11:17.339 + [[ -e /var/jenkins/workspace/iscsi-vg-autotest/autorun-spdk.conf ]]
00:11:17.340 + source /var/jenkins/workspace/iscsi-vg-autotest/autorun-spdk.conf
00:11:17.340 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:11:17.340 ++ SPDK_TEST_ISCSI_INITIATOR=1
00:11:17.340 ++ SPDK_TEST_ISCSI=1
00:11:17.340 ++ SPDK_TEST_RBD=1
00:11:17.340 ++ SPDK_RUN_ASAN=1
00:11:17.340 ++ SPDK_RUN_UBSAN=1
00:11:17.340 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:11:17.340 ++ RUN_NIGHTLY=1
00:11:17.340 + cd /var/jenkins/workspace/iscsi-vg-autotest
00:11:17.340 + nvme_files=()
00:11:17.340 + declare -A nvme_files
00:11:17.340 + backend_dir=/var/lib/libvirt/images/backends
00:11:17.340 + nvme_files['nvme.img']=5G
00:11:17.340 + nvme_files['nvme-cmb.img']=5G
00:11:17.340 + nvme_files['nvme-multi0.img']=4G
00:11:17.340 + nvme_files['nvme-multi1.img']=4G
00:11:17.340 + nvme_files['nvme-multi2.img']=4G
00:11:17.340 + nvme_files['nvme-openstack.img']=8G
00:11:17.340 + nvme_files['nvme-zns.img']=5G
00:11:17.340 + (( SPDK_TEST_NVME_PMR == 1 ))
00:11:17.340 + (( SPDK_TEST_FTL == 1 ))
00:11:17.340 + (( SPDK_TEST_NVME_FDP == 1 ))
00:11:17.340 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:11:17.340 + for nvme in "${!nvme_files[@]}"
00:11:17.340 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:11:17.340 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:11:17.340 + for nvme in "${!nvme_files[@]}"
00:11:17.340 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:11:17.340 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:11:17.340 + for nvme in "${!nvme_files[@]}"
00:11:17.340 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:11:17.340 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:11:17.340 + for nvme in "${!nvme_files[@]}"
00:11:17.340 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:11:17.340 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:11:17.340 + for nvme in "${!nvme_files[@]}"
00:11:17.340 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:11:17.340 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:11:17.340 + for nvme in "${!nvme_files[@]}"
00:11:17.340 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:11:17.340 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:11:17.340 + for nvme in "${!nvme_files[@]}"
00:11:17.340 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:11:17.340 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:11:17.340 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:11:17.340 + echo 'End stage prepare_nvme.sh'
00:11:17.340 End stage prepare_nvme.sh
00:11:17.350 [Pipeline] sh
00:11:17.630 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:11:17.630 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora38
00:11:17.630
00:11:17.630 DIR=/var/jenkins/workspace/iscsi-vg-autotest/spdk/scripts/vagrant
00:11:17.630 SPDK_DIR=/var/jenkins/workspace/iscsi-vg-autotest/spdk
00:11:17.630 VAGRANT_TARGET=/var/jenkins/workspace/iscsi-vg-autotest
00:11:17.630 HELP=0
00:11:17.630 DRY_RUN=0
00:11:17.630 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,
00:11:17.630 NVME_DISKS_TYPE=nvme,nvme,
00:11:17.630 NVME_AUTO_CREATE=0
00:11:17.630 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,
00:11:17.630 NVME_CMB=,,
00:11:17.630 NVME_PMR=,,
00:11:17.630 NVME_ZNS=,,
00:11:17.630 NVME_MS=,,
00:11:17.630 NVME_FDP=,,
00:11:17.630 SPDK_VAGRANT_DISTRO=fedora38
00:11:17.630 SPDK_VAGRANT_VMCPU=10
00:11:17.630 SPDK_VAGRANT_VMRAM=12288
00:11:17.630 SPDK_VAGRANT_PROVIDER=libvirt
00:11:17.630 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:11:17.630 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:11:17.630 SPDK_OPENSTACK_NETWORK=0
00:11:17.630 VAGRANT_PACKAGE_BOX=0
00:11:17.630 VAGRANTFILE=/var/jenkins/workspace/iscsi-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:11:17.630 FORCE_DISTRO=true
00:11:17.630 VAGRANT_BOX_VERSION=
00:11:17.630 EXTRA_VAGRANTFILES=
00:11:17.630 NIC_MODEL=e1000
00:11:17.630
00:11:17.630 mkdir: created directory '/var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt'
00:11:17.630 /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt /var/jenkins/workspace/iscsi-vg-autotest
00:11:20.913 Bringing machine 'default' up with 'libvirt' provider...
00:11:21.478 ==> default: Creating image (snapshot of base box volume).
00:11:22.413 ==> default: Creating domain with the following settings...
00:11:22.413 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721666962_5025629978a0e70dd2e1
00:11:22.413 ==> default: -- Domain type: kvm
00:11:22.413 ==> default: -- Cpus: 10
00:11:22.413 ==> default: -- Feature: acpi
00:11:22.413 ==> default: -- Feature: apic
00:11:22.413 ==> default: -- Feature: pae
00:11:22.413 ==> default: -- Memory: 12288M
00:11:22.413 ==> default: -- Memory Backing: hugepages:
00:11:22.413 ==> default: -- Management MAC:
00:11:22.413 ==> default: -- Loader:
00:11:22.413 ==> default: -- Nvram:
00:11:22.413 ==> default: -- Base box: spdk/fedora38
00:11:22.413 ==> default: -- Storage pool: default
00:11:22.413 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721666962_5025629978a0e70dd2e1.img (20G)
00:11:22.413 ==> default: -- Volume Cache: default
00:11:22.413 ==> default: -- Kernel:
00:11:22.413 ==> default: -- Initrd:
00:11:22.413 ==> default: -- Graphics Type: vnc
00:11:22.413 ==> default: -- Graphics Port: -1
00:11:22.413 ==> default: -- Graphics IP: 127.0.0.1
00:11:22.413 ==> default: -- Graphics Password: Not defined
00:11:22.413 ==> default: -- Video Type: cirrus
00:11:22.413 ==> default: -- Video VRAM: 9216
00:11:22.413 ==> default: -- Sound Type:
00:11:22.413 ==> default: -- Keymap: en-us
00:11:22.413 ==> default: -- TPM Path:
00:11:22.413 ==> default: -- INPUT: type=mouse, bus=ps2
00:11:22.413 ==> default: -- Command line args:
00:11:22.413 ==> default: -> value=-device,
00:11:22.413 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:11:22.413 ==> default: -> value=-drive,
00:11:22.413 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0,
00:11:22.413 ==> default: -> value=-device,
00:11:22.413 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:11:22.413 ==> default: -> value=-device,
00:11:22.413 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:11:22.413 ==> default: -> value=-drive,
00:11:22.413 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:11:22.413 ==> default: -> value=-device,
00:11:22.413 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:11:22.413 ==> default: -> value=-drive,
00:11:22.414 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:11:22.414 ==> default: -> value=-device,
00:11:22.414 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:11:22.414 ==> default: -> value=-drive,
00:11:22.414 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:11:22.414 ==> default: -> value=-device,
00:11:22.414 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:11:22.414 ==> default: Creating shared folders metadata...
00:11:22.414 ==> default: Starting domain.
00:11:24.320 ==> default: Waiting for domain to get an IP address...
00:11:42.478 ==> default: Waiting for SSH to become available...
00:11:42.478 ==> default: Configuring and enabling network interfaces...
00:11:46.661 default: SSH address: 192.168.121.97:22
00:11:46.661 default: SSH username: vagrant
00:11:46.661 default: SSH auth method: private key
00:11:48.560 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:11:56.668 ==> default: Mounting SSHFS shared folder...
00:11:57.637 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:11:57.637 ==> default: Checking Mount..
00:11:59.011 ==> default: Folder Successfully Mounted!
00:11:59.011 ==> default: Running provisioner: file...
00:11:59.576 default: ~/.gitconfig => .gitconfig
00:12:00.172
00:12:00.172 SUCCESS!
00:12:00.172
00:12:00.172 cd to /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use.
00:12:00.172 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:12:00.172 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt" to destroy all trace of vm.
00:12:00.172
00:12:00.182 [Pipeline] }
00:12:00.200 [Pipeline] // stage
00:12:00.207 [Pipeline] dir
00:12:00.207 Running in /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt
00:12:00.208 [Pipeline] {
00:12:00.217 [Pipeline] catchError
00:12:00.218 [Pipeline] {
00:12:00.230 [Pipeline] sh
00:12:00.509 + vagrant ssh-config --host vagrant
00:12:00.519 + sed -ne /^Host/,$p
00:12:00.519 + tee ssh_conf
00:12:04.704 Host vagrant
00:12:04.704 HostName 192.168.121.97
00:12:04.704 User vagrant
00:12:04.704 Port 22
00:12:04.704 UserKnownHostsFile /dev/null
00:12:04.704 StrictHostKeyChecking no
00:12:04.704 PasswordAuthentication no
00:12:04.704 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38
00:12:04.704 IdentitiesOnly yes
00:12:04.704 LogLevel FATAL
00:12:04.704 ForwardAgent yes
00:12:04.704 ForwardX11 yes
00:12:04.704
00:12:04.719 [Pipeline] withEnv
00:12:04.722 [Pipeline] {
00:12:04.738 [Pipeline] sh
00:12:05.018 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:12:05.018 source /etc/os-release
00:12:05.018 [[ -e /image.version ]] && img=$(< /image.version)
00:12:05.018 # Minimal, systemd-like check.
00:12:05.018 if [[ -e /.dockerenv ]]; then
00:12:05.018 # Clear garbage from the node's name:
00:12:05.018 # agt-er_autotest_547-896 -> autotest_547-896
00:12:05.018 # $HOSTNAME is the actual container id
00:12:05.018 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:12:05.018 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:12:05.018 # We can assume this is a mount from a host where container is running,
00:12:05.018 # so fetch its hostname to easily identify the target swarm worker.
00:12:05.018 container="$(< /etc/hostname) ($agent)"
00:12:05.018 else
00:12:05.018 # Fallback
00:12:05.018 container=$agent
00:12:05.018 fi
00:12:05.018 fi
00:12:05.018 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:12:05.018
00:12:05.287 [Pipeline] }
00:12:05.306 [Pipeline] // withEnv
00:12:05.314 [Pipeline] setCustomBuildProperty
00:12:05.332 [Pipeline] stage
00:12:05.334 [Pipeline] { (Tests)
00:12:05.352 [Pipeline] sh
00:12:05.676 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:12:05.944 [Pipeline] sh
00:12:06.223 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:12:06.493 [Pipeline] timeout
00:12:06.493 Timeout set to expire in 45 min
00:12:06.494 [Pipeline] {
00:12:06.507 [Pipeline] sh
00:12:06.785 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:12:07.350 HEAD is now at f7b31b2b9 log: declare g_deprecation_epoch static
00:12:07.362 [Pipeline] sh
00:12:07.690 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:12:07.704 [Pipeline] sh
00:12:07.980 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:12:08.252 [Pipeline] sh
00:12:08.530 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=iscsi-vg-autotest ./autoruner.sh spdk_repo
00:12:08.788 ++ readlink -f spdk_repo
00:12:08.788 + DIR_ROOT=/home/vagrant/spdk_repo
00:12:08.788 + [[ -n /home/vagrant/spdk_repo ]]
00:12:08.788 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:12:08.788 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:12:08.788 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:12:08.788 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:12:08.788 + [[ -d /home/vagrant/spdk_repo/output ]]
00:12:08.788 + [[ iscsi-vg-autotest == pkgdep-* ]]
00:12:08.788 + cd /home/vagrant/spdk_repo
00:12:08.788 + source /etc/os-release
00:12:08.788 ++ NAME='Fedora Linux'
00:12:08.788 ++ VERSION='38 (Cloud Edition)'
00:12:08.788 ++ ID=fedora
00:12:08.788 ++ VERSION_ID=38
00:12:08.788 ++ VERSION_CODENAME=
00:12:08.788 ++ PLATFORM_ID=platform:f38
00:12:08.788 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:12:08.788 ++ ANSI_COLOR='0;38;2;60;110;180'
00:12:08.788 ++ LOGO=fedora-logo-icon
00:12:08.788 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:12:08.788 ++ HOME_URL=https://fedoraproject.org/
00:12:08.788 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:12:08.788 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:12:08.788 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:12:08.788 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:12:08.788 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:12:08.788 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:12:08.788 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:12:08.788 ++ SUPPORT_END=2024-05-14
00:12:08.788 ++ VARIANT='Cloud Edition'
00:12:08.788 ++ VARIANT_ID=cloud
00:12:08.788 + uname -a
00:12:08.788 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:12:08.788 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:12:09.354 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:09.354 Hugepages
00:12:09.354 node hugesize free / total
00:12:09.354 node0 1048576kB 0 / 0
00:12:09.354 node0 2048kB 0 / 0
00:12:09.355
00:12:09.355 Type BDF Vendor Device NUMA Driver Device Block devices
00:12:09.355 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:12:09.355 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:12:09.355 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:12:09.355 + rm -f /tmp/spdk-ld-path
00:12:09.355 + source autorun-spdk.conf
00:12:09.355 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:12:09.355 ++ SPDK_TEST_ISCSI_INITIATOR=1
00:12:09.355 ++ SPDK_TEST_ISCSI=1
00:12:09.355 ++ SPDK_TEST_RBD=1
00:12:09.355 ++ SPDK_RUN_ASAN=1
00:12:09.355 ++ SPDK_RUN_UBSAN=1
00:12:09.355 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:12:09.355 ++ RUN_NIGHTLY=1
00:12:09.355 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:12:09.355 + [[ -n '' ]]
00:12:09.355 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:12:09.355 + for M in /var/spdk/build-*-manifest.txt
00:12:09.355 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:12:09.355 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:12:09.355 + for M in /var/spdk/build-*-manifest.txt
00:12:09.355 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:12:09.355 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:12:09.355 ++ uname
00:12:09.355 + [[ Linux == \L\i\n\u\x ]]
00:12:09.355 + sudo dmesg -T
00:12:09.355 + sudo dmesg --clear
00:12:09.355 + dmesg_pid=5272
00:12:09.355 + [[ Fedora Linux == FreeBSD ]]
00:12:09.355 + sudo dmesg -Tw
00:12:09.355 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:12:09.355 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:12:09.355 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:12:09.355 + [[ -x /usr/src/fio-static/fio ]]
00:12:09.355 + export FIO_BIN=/usr/src/fio-static/fio
00:12:09.355 + FIO_BIN=/usr/src/fio-static/fio
00:12:09.355 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:12:09.355 + [[ ! -v VFIO_QEMU_BIN ]]
00:12:09.355 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:12:09.355 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:09.355 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:09.355 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:12:09.355 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:12:09.355 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:12:09.355 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:12:09.355 Test configuration:
00:12:09.355 SPDK_RUN_FUNCTIONAL_TEST=1
00:12:09.355 SPDK_TEST_ISCSI_INITIATOR=1
00:12:09.355 SPDK_TEST_ISCSI=1
00:12:09.355 SPDK_TEST_RBD=1
00:12:09.355 SPDK_RUN_ASAN=1
00:12:09.355 SPDK_RUN_UBSAN=1
00:12:09.355 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:12:09.613 RUN_NIGHTLY=1
16:50:10 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:09.613 16:50:10 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:12:09.613 16:50:10 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:09.613 16:50:10 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:09.613 16:50:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:09.613 16:50:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:09.613 16:50:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:09.613 16:50:10 -- paths/export.sh@5 -- $ export PATH
00:12:09.613 16:50:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:09.613 16:50:10 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:12:09.613 16:50:10 -- common/autobuild_common.sh@447 -- $ date +%s
00:12:09.613 16:50:10 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721667010.XXXXXX
00:12:09.613 16:50:11 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721667010.bqO1iF
00:12:09.613 16:50:11 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:12:09.613 16:50:11 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:12:09.613 16:50:11 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:12:09.613 16:50:11 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:12:09.613 16:50:11 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:12:09.613 16:50:11 -- common/autobuild_common.sh@463 -- $ get_config_params
00:12:09.613 16:50:11 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:12:09.613 16:50:11 -- common/autotest_common.sh@10 -- $ set +x
00:12:09.613 16:50:11 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-rbd --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:12:09.614 16:50:11 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:12:09.614 16:50:11 -- pm/common@17 -- $ local monitor
00:12:09.614 16:50:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:12:09.614 16:50:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:12:09.614 16:50:11 -- pm/common@25 -- $ sleep 1
00:12:09.614 16:50:11 -- pm/common@21 -- $ date +%s
00:12:09.614 16:50:11 -- pm/common@21 -- $ date +%s
00:12:09.614 16:50:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721667011
00:12:09.614 16:50:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721667011
00:12:09.614 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721667011_collect-cpu-load.pm.log
00:12:09.614 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721667011_collect-vmstat.pm.log
00:12:10.568 16:50:12 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:12:10.568 16:50:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:12:10.568 16:50:12 -- spdk/autobuild.sh@12 -- $ umask 022
00:12:10.568 16:50:12 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:12:10.568 16:50:12 -- spdk/autobuild.sh@16 -- $ date -u
00:12:10.568 Mon Jul 22 04:50:12 PM UTC 2024
00:12:10.568 16:50:12 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:12:10.568 v24.09-pre-297-gf7b31b2b9
00:12:10.568 16:50:12 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:12:10.568 16:50:12 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:12:10.568 16:50:12 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:12:10.568 16:50:12 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:12:10.568 16:50:12 -- common/autotest_common.sh@10 -- $ set +x
00:12:10.568 ************************************
00:12:10.568 START TEST asan
00:12:10.568 ************************************
00:12:10.568 using asan
00:12:10.568 16:50:12 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan'
00:12:10.568
00:12:10.568 real 0m0.000s
00:12:10.568 user 0m0.000s
00:12:10.568 sys 0m0.000s
00:12:10.568 16:50:12 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:12:10.568 ************************************
00:12:10.568 END TEST asan
16:50:12 asan -- common/autotest_common.sh@10 -- $ set +x
00:12:10.568 ************************************
00:12:10.568 16:50:12 -- common/autotest_common.sh@1142 -- $ return 0
00:12:10.568 16:50:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:12:10.568 16:50:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:12:10.568 16:50:12 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:12:10.568 16:50:12 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:12:10.568 16:50:12 -- common/autotest_common.sh@10 -- $ set +x
00:12:10.568 ************************************
00:12:10.568 START TEST ubsan
************************************
using ubsan
16:50:12 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:12:10.568
00:12:10.568 real 0m0.000s
00:12:10.568 user 0m0.000s
00:12:10.568 sys 0m0.000s
00:12:10.568 16:50:12 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:12:10.568 16:50:12 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:12:10.568 ************************************
00:12:10.568 END TEST ubsan
00:12:10.568 ************************************
00:12:10.568 16:50:12 -- common/autotest_common.sh@1142 -- $ return 0
00:12:10.568 16:50:12 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:12:10.568 16:50:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:12:10.568 16:50:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:12:10.568 16:50:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:12:10.568 16:50:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:12:10.568 16:50:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:12:10.568 16:50:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:12:10.568 16:50:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:12:10.568 16:50:12 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-rbd --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:12:10.833 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:12:10.833 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:12:11.399 Using 'verbs' RDMA provider
00:12:27.256 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:12:39.448 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:12:39.448 Creating mk/config.mk...done.
00:12:39.448 Creating mk/cc.flags.mk...done.
00:12:39.448 Type 'make' to build.
00:12:39.448 16:50:39 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:12:39.448 16:50:39 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:12:39.448 16:50:39 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:12:39.448 16:50:39 -- common/autotest_common.sh@10 -- $ set +x
00:12:39.448 ************************************
00:12:39.448 START TEST make
00:12:39.448 ************************************
00:12:39.448 16:50:39 make -- common/autotest_common.sh@1123 -- $ make -j10
00:12:39.448 make[1]: Nothing to be done for 'all'.
00:12:49.448 The Meson build system
00:12:49.448 Version: 1.3.1
00:12:49.448 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:12:49.448 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:12:49.448 Build type: native build
00:12:49.448 Program cat found: YES (/usr/bin/cat)
00:12:49.448 Project name: DPDK
00:12:49.448 Project version: 24.03.0
00:12:49.448 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:12:49.448 C linker for the host machine: cc ld.bfd 2.39-16
00:12:49.448 Host machine cpu family: x86_64
00:12:49.448 Host machine cpu: x86_64
00:12:49.448 Message: ## Building in Developer Mode ##
00:12:49.448 Program pkg-config found: YES (/usr/bin/pkg-config)
00:12:49.448 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:12:49.448 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:12:49.448 Program python3 found: YES (/usr/bin/python3)
00:12:49.448 Program cat found: YES (/usr/bin/cat)
00:12:49.448 Compiler for C supports arguments -march=native: YES
00:12:49.448 Checking for size of "void *" : 8
00:12:49.448 Checking for size of "void *" : 8 (cached)
00:12:49.448 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:12:49.448 Library m found: YES
00:12:49.448 Library numa found: YES
00:12:49.448 Has header "numaif.h" : YES
00:12:49.448 Library fdt found: NO 00:12:49.448 Library execinfo found: NO 00:12:49.448 Has header "execinfo.h" : YES 00:12:49.448 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:12:49.448 Run-time dependency libarchive found: NO (tried pkgconfig) 00:12:49.448 Run-time dependency libbsd found: NO (tried pkgconfig) 00:12:49.448 Run-time dependency jansson found: NO (tried pkgconfig) 00:12:49.448 Run-time dependency openssl found: YES 3.0.9 00:12:49.448 Run-time dependency libpcap found: YES 1.10.4 00:12:49.448 Has header "pcap.h" with dependency libpcap: YES 00:12:49.448 Compiler for C supports arguments -Wcast-qual: YES 00:12:49.448 Compiler for C supports arguments -Wdeprecated: YES 00:12:49.448 Compiler for C supports arguments -Wformat: YES 00:12:49.448 Compiler for C supports arguments -Wformat-nonliteral: NO 00:12:49.448 Compiler for C supports arguments -Wformat-security: NO 00:12:49.448 Compiler for C supports arguments -Wmissing-declarations: YES 00:12:49.448 Compiler for C supports arguments -Wmissing-prototypes: YES 00:12:49.448 Compiler for C supports arguments -Wnested-externs: YES 00:12:49.448 Compiler for C supports arguments -Wold-style-definition: YES 00:12:49.448 Compiler for C supports arguments -Wpointer-arith: YES 00:12:49.448 Compiler for C supports arguments -Wsign-compare: YES 00:12:49.448 Compiler for C supports arguments -Wstrict-prototypes: YES 00:12:49.448 Compiler for C supports arguments -Wundef: YES 00:12:49.448 Compiler for C supports arguments -Wwrite-strings: YES 00:12:49.448 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:12:49.448 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:12:49.448 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:12:49.448 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:12:49.448 Program objdump found: YES (/usr/bin/objdump) 00:12:49.448 Compiler for C supports arguments -mavx512f: YES 00:12:49.448 Checking if "AVX512 
checking" compiles: YES 00:12:49.448 Fetching value of define "__SSE4_2__" : 1 00:12:49.448 Fetching value of define "__AES__" : 1 00:12:49.448 Fetching value of define "__AVX__" : 1 00:12:49.448 Fetching value of define "__AVX2__" : 1 00:12:49.448 Fetching value of define "__AVX512BW__" : (undefined) 00:12:49.448 Fetching value of define "__AVX512CD__" : (undefined) 00:12:49.448 Fetching value of define "__AVX512DQ__" : (undefined) 00:12:49.448 Fetching value of define "__AVX512F__" : (undefined) 00:12:49.448 Fetching value of define "__AVX512VL__" : (undefined) 00:12:49.448 Fetching value of define "__PCLMUL__" : 1 00:12:49.448 Fetching value of define "__RDRND__" : 1 00:12:49.448 Fetching value of define "__RDSEED__" : 1 00:12:49.448 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:12:49.448 Fetching value of define "__znver1__" : (undefined) 00:12:49.448 Fetching value of define "__znver2__" : (undefined) 00:12:49.448 Fetching value of define "__znver3__" : (undefined) 00:12:49.448 Fetching value of define "__znver4__" : (undefined) 00:12:49.448 Library asan found: YES 00:12:49.448 Compiler for C supports arguments -Wno-format-truncation: YES 00:12:49.448 Message: lib/log: Defining dependency "log" 00:12:49.448 Message: lib/kvargs: Defining dependency "kvargs" 00:12:49.448 Message: lib/telemetry: Defining dependency "telemetry" 00:12:49.448 Library rt found: YES 00:12:49.448 Checking for function "getentropy" : NO 00:12:49.448 Message: lib/eal: Defining dependency "eal" 00:12:49.448 Message: lib/ring: Defining dependency "ring" 00:12:49.448 Message: lib/rcu: Defining dependency "rcu" 00:12:49.448 Message: lib/mempool: Defining dependency "mempool" 00:12:49.448 Message: lib/mbuf: Defining dependency "mbuf" 00:12:49.448 Fetching value of define "__PCLMUL__" : 1 (cached) 00:12:49.448 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:12:49.448 Compiler for C supports arguments -mpclmul: YES 00:12:49.448 Compiler for C supports arguments 
-maes: YES 00:12:49.449 Compiler for C supports arguments -mavx512f: YES (cached) 00:12:49.449 Compiler for C supports arguments -mavx512bw: YES 00:12:49.449 Compiler for C supports arguments -mavx512dq: YES 00:12:49.449 Compiler for C supports arguments -mavx512vl: YES 00:12:49.449 Compiler for C supports arguments -mvpclmulqdq: YES 00:12:49.449 Compiler for C supports arguments -mavx2: YES 00:12:49.449 Compiler for C supports arguments -mavx: YES 00:12:49.449 Message: lib/net: Defining dependency "net" 00:12:49.449 Message: lib/meter: Defining dependency "meter" 00:12:49.449 Message: lib/ethdev: Defining dependency "ethdev" 00:12:49.449 Message: lib/pci: Defining dependency "pci" 00:12:49.449 Message: lib/cmdline: Defining dependency "cmdline" 00:12:49.449 Message: lib/hash: Defining dependency "hash" 00:12:49.449 Message: lib/timer: Defining dependency "timer" 00:12:49.449 Message: lib/compressdev: Defining dependency "compressdev" 00:12:49.449 Message: lib/cryptodev: Defining dependency "cryptodev" 00:12:49.449 Message: lib/dmadev: Defining dependency "dmadev" 00:12:49.449 Compiler for C supports arguments -Wno-cast-qual: YES 00:12:49.449 Message: lib/power: Defining dependency "power" 00:12:49.449 Message: lib/reorder: Defining dependency "reorder" 00:12:49.449 Message: lib/security: Defining dependency "security" 00:12:49.449 Has header "linux/userfaultfd.h" : YES 00:12:49.449 Has header "linux/vduse.h" : YES 00:12:49.449 Message: lib/vhost: Defining dependency "vhost" 00:12:49.449 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:12:49.449 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:12:49.449 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:12:49.449 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:12:49.449 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:12:49.449 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:12:49.449 Message: 
Disabling ml/* drivers: missing internal dependency "mldev" 00:12:49.449 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:12:49.449 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:12:49.449 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:12:49.449 Program doxygen found: YES (/usr/bin/doxygen) 00:12:49.449 Configuring doxy-api-html.conf using configuration 00:12:49.449 Configuring doxy-api-man.conf using configuration 00:12:49.449 Program mandb found: YES (/usr/bin/mandb) 00:12:49.449 Program sphinx-build found: NO 00:12:49.449 Configuring rte_build_config.h using configuration 00:12:49.449 Message: 00:12:49.449 ================= 00:12:49.449 Applications Enabled 00:12:49.449 ================= 00:12:49.449 00:12:49.449 apps: 00:12:49.449 00:12:49.449 00:12:49.449 Message: 00:12:49.449 ================= 00:12:49.449 Libraries Enabled 00:12:49.449 ================= 00:12:49.449 00:12:49.449 libs: 00:12:49.449 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:12:49.449 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:12:49.449 cryptodev, dmadev, power, reorder, security, vhost, 00:12:49.449 00:12:49.449 Message: 00:12:49.449 =============== 00:12:49.449 Drivers Enabled 00:12:49.449 =============== 00:12:49.449 00:12:49.449 common: 00:12:49.449 00:12:49.449 bus: 00:12:49.449 pci, vdev, 00:12:49.449 mempool: 00:12:49.449 ring, 00:12:49.449 dma: 00:12:49.449 00:12:49.449 net: 00:12:49.449 00:12:49.449 crypto: 00:12:49.449 00:12:49.449 compress: 00:12:49.449 00:12:49.449 vdpa: 00:12:49.449 00:12:49.449 00:12:49.449 Message: 00:12:49.449 ================= 00:12:49.449 Content Skipped 00:12:49.449 ================= 00:12:49.449 00:12:49.449 apps: 00:12:49.449 dumpcap: explicitly disabled via build config 00:12:49.449 graph: explicitly disabled via build config 00:12:49.449 pdump: explicitly disabled via build config 00:12:49.449 proc-info: explicitly disabled via build 
config 00:12:49.449 test-acl: explicitly disabled via build config 00:12:49.449 test-bbdev: explicitly disabled via build config 00:12:49.449 test-cmdline: explicitly disabled via build config 00:12:49.449 test-compress-perf: explicitly disabled via build config 00:12:49.449 test-crypto-perf: explicitly disabled via build config 00:12:49.449 test-dma-perf: explicitly disabled via build config 00:12:49.449 test-eventdev: explicitly disabled via build config 00:12:49.449 test-fib: explicitly disabled via build config 00:12:49.449 test-flow-perf: explicitly disabled via build config 00:12:49.449 test-gpudev: explicitly disabled via build config 00:12:49.449 test-mldev: explicitly disabled via build config 00:12:49.449 test-pipeline: explicitly disabled via build config 00:12:49.449 test-pmd: explicitly disabled via build config 00:12:49.449 test-regex: explicitly disabled via build config 00:12:49.449 test-sad: explicitly disabled via build config 00:12:49.449 test-security-perf: explicitly disabled via build config 00:12:49.449 00:12:49.449 libs: 00:12:49.449 argparse: explicitly disabled via build config 00:12:49.449 metrics: explicitly disabled via build config 00:12:49.449 acl: explicitly disabled via build config 00:12:49.449 bbdev: explicitly disabled via build config 00:12:49.449 bitratestats: explicitly disabled via build config 00:12:49.449 bpf: explicitly disabled via build config 00:12:49.449 cfgfile: explicitly disabled via build config 00:12:49.449 distributor: explicitly disabled via build config 00:12:49.449 efd: explicitly disabled via build config 00:12:49.449 eventdev: explicitly disabled via build config 00:12:49.449 dispatcher: explicitly disabled via build config 00:12:49.449 gpudev: explicitly disabled via build config 00:12:49.449 gro: explicitly disabled via build config 00:12:49.449 gso: explicitly disabled via build config 00:12:49.449 ip_frag: explicitly disabled via build config 00:12:49.449 jobstats: explicitly disabled via build config 
00:12:49.449 latencystats: explicitly disabled via build config 00:12:49.449 lpm: explicitly disabled via build config 00:12:49.449 member: explicitly disabled via build config 00:12:49.449 pcapng: explicitly disabled via build config 00:12:49.449 rawdev: explicitly disabled via build config 00:12:49.449 regexdev: explicitly disabled via build config 00:12:49.449 mldev: explicitly disabled via build config 00:12:49.449 rib: explicitly disabled via build config 00:12:49.449 sched: explicitly disabled via build config 00:12:49.449 stack: explicitly disabled via build config 00:12:49.449 ipsec: explicitly disabled via build config 00:12:49.449 pdcp: explicitly disabled via build config 00:12:49.449 fib: explicitly disabled via build config 00:12:49.449 port: explicitly disabled via build config 00:12:49.449 pdump: explicitly disabled via build config 00:12:49.449 table: explicitly disabled via build config 00:12:49.449 pipeline: explicitly disabled via build config 00:12:49.449 graph: explicitly disabled via build config 00:12:49.449 node: explicitly disabled via build config 00:12:49.449 00:12:49.449 drivers: 00:12:49.449 common/cpt: not in enabled drivers build config 00:12:49.449 common/dpaax: not in enabled drivers build config 00:12:49.449 common/iavf: not in enabled drivers build config 00:12:49.449 common/idpf: not in enabled drivers build config 00:12:49.449 common/ionic: not in enabled drivers build config 00:12:49.449 common/mvep: not in enabled drivers build config 00:12:49.449 common/octeontx: not in enabled drivers build config 00:12:49.449 bus/auxiliary: not in enabled drivers build config 00:12:49.449 bus/cdx: not in enabled drivers build config 00:12:49.449 bus/dpaa: not in enabled drivers build config 00:12:49.449 bus/fslmc: not in enabled drivers build config 00:12:49.449 bus/ifpga: not in enabled drivers build config 00:12:49.449 bus/platform: not in enabled drivers build config 00:12:49.449 bus/uacce: not in enabled drivers build config 
00:12:49.449 bus/vmbus: not in enabled drivers build config 00:12:49.449 common/cnxk: not in enabled drivers build config 00:12:49.449 common/mlx5: not in enabled drivers build config 00:12:49.449 common/nfp: not in enabled drivers build config 00:12:49.449 common/nitrox: not in enabled drivers build config 00:12:49.449 common/qat: not in enabled drivers build config 00:12:49.449 common/sfc_efx: not in enabled drivers build config 00:12:49.449 mempool/bucket: not in enabled drivers build config 00:12:49.449 mempool/cnxk: not in enabled drivers build config 00:12:49.449 mempool/dpaa: not in enabled drivers build config 00:12:49.449 mempool/dpaa2: not in enabled drivers build config 00:12:49.449 mempool/octeontx: not in enabled drivers build config 00:12:49.449 mempool/stack: not in enabled drivers build config 00:12:49.449 dma/cnxk: not in enabled drivers build config 00:12:49.449 dma/dpaa: not in enabled drivers build config 00:12:49.449 dma/dpaa2: not in enabled drivers build config 00:12:49.449 dma/hisilicon: not in enabled drivers build config 00:12:49.449 dma/idxd: not in enabled drivers build config 00:12:49.449 dma/ioat: not in enabled drivers build config 00:12:49.449 dma/skeleton: not in enabled drivers build config 00:12:49.449 net/af_packet: not in enabled drivers build config 00:12:49.449 net/af_xdp: not in enabled drivers build config 00:12:49.449 net/ark: not in enabled drivers build config 00:12:49.449 net/atlantic: not in enabled drivers build config 00:12:49.449 net/avp: not in enabled drivers build config 00:12:49.449 net/axgbe: not in enabled drivers build config 00:12:49.449 net/bnx2x: not in enabled drivers build config 00:12:49.449 net/bnxt: not in enabled drivers build config 00:12:49.449 net/bonding: not in enabled drivers build config 00:12:49.449 net/cnxk: not in enabled drivers build config 00:12:49.449 net/cpfl: not in enabled drivers build config 00:12:49.449 net/cxgbe: not in enabled drivers build config 00:12:49.450 net/dpaa: not in 
enabled drivers build config 00:12:49.450 net/dpaa2: not in enabled drivers build config 00:12:49.450 net/e1000: not in enabled drivers build config 00:12:49.450 net/ena: not in enabled drivers build config 00:12:49.450 net/enetc: not in enabled drivers build config 00:12:49.450 net/enetfec: not in enabled drivers build config 00:12:49.450 net/enic: not in enabled drivers build config 00:12:49.450 net/failsafe: not in enabled drivers build config 00:12:49.450 net/fm10k: not in enabled drivers build config 00:12:49.450 net/gve: not in enabled drivers build config 00:12:49.450 net/hinic: not in enabled drivers build config 00:12:49.450 net/hns3: not in enabled drivers build config 00:12:49.450 net/i40e: not in enabled drivers build config 00:12:49.450 net/iavf: not in enabled drivers build config 00:12:49.450 net/ice: not in enabled drivers build config 00:12:49.450 net/idpf: not in enabled drivers build config 00:12:49.450 net/igc: not in enabled drivers build config 00:12:49.450 net/ionic: not in enabled drivers build config 00:12:49.450 net/ipn3ke: not in enabled drivers build config 00:12:49.450 net/ixgbe: not in enabled drivers build config 00:12:49.450 net/mana: not in enabled drivers build config 00:12:49.450 net/memif: not in enabled drivers build config 00:12:49.450 net/mlx4: not in enabled drivers build config 00:12:49.450 net/mlx5: not in enabled drivers build config 00:12:49.450 net/mvneta: not in enabled drivers build config 00:12:49.450 net/mvpp2: not in enabled drivers build config 00:12:49.450 net/netvsc: not in enabled drivers build config 00:12:49.450 net/nfb: not in enabled drivers build config 00:12:49.450 net/nfp: not in enabled drivers build config 00:12:49.450 net/ngbe: not in enabled drivers build config 00:12:49.450 net/null: not in enabled drivers build config 00:12:49.450 net/octeontx: not in enabled drivers build config 00:12:49.450 net/octeon_ep: not in enabled drivers build config 00:12:49.450 net/pcap: not in enabled drivers build 
config 00:12:49.450 net/pfe: not in enabled drivers build config 00:12:49.450 net/qede: not in enabled drivers build config 00:12:49.450 net/ring: not in enabled drivers build config 00:12:49.450 net/sfc: not in enabled drivers build config 00:12:49.450 net/softnic: not in enabled drivers build config 00:12:49.450 net/tap: not in enabled drivers build config 00:12:49.450 net/thunderx: not in enabled drivers build config 00:12:49.450 net/txgbe: not in enabled drivers build config 00:12:49.450 net/vdev_netvsc: not in enabled drivers build config 00:12:49.450 net/vhost: not in enabled drivers build config 00:12:49.450 net/virtio: not in enabled drivers build config 00:12:49.450 net/vmxnet3: not in enabled drivers build config 00:12:49.450 raw/*: missing internal dependency, "rawdev" 00:12:49.450 crypto/armv8: not in enabled drivers build config 00:12:49.450 crypto/bcmfs: not in enabled drivers build config 00:12:49.450 crypto/caam_jr: not in enabled drivers build config 00:12:49.450 crypto/ccp: not in enabled drivers build config 00:12:49.450 crypto/cnxk: not in enabled drivers build config 00:12:49.450 crypto/dpaa_sec: not in enabled drivers build config 00:12:49.450 crypto/dpaa2_sec: not in enabled drivers build config 00:12:49.450 crypto/ipsec_mb: not in enabled drivers build config 00:12:49.450 crypto/mlx5: not in enabled drivers build config 00:12:49.450 crypto/mvsam: not in enabled drivers build config 00:12:49.450 crypto/nitrox: not in enabled drivers build config 00:12:49.450 crypto/null: not in enabled drivers build config 00:12:49.450 crypto/octeontx: not in enabled drivers build config 00:12:49.450 crypto/openssl: not in enabled drivers build config 00:12:49.450 crypto/scheduler: not in enabled drivers build config 00:12:49.450 crypto/uadk: not in enabled drivers build config 00:12:49.450 crypto/virtio: not in enabled drivers build config 00:12:49.450 compress/isal: not in enabled drivers build config 00:12:49.450 compress/mlx5: not in enabled drivers build 
config 00:12:49.450 compress/nitrox: not in enabled drivers build config 00:12:49.450 compress/octeontx: not in enabled drivers build config 00:12:49.450 compress/zlib: not in enabled drivers build config 00:12:49.450 regex/*: missing internal dependency, "regexdev" 00:12:49.450 ml/*: missing internal dependency, "mldev" 00:12:49.450 vdpa/ifc: not in enabled drivers build config 00:12:49.450 vdpa/mlx5: not in enabled drivers build config 00:12:49.450 vdpa/nfp: not in enabled drivers build config 00:12:49.450 vdpa/sfc: not in enabled drivers build config 00:12:49.450 event/*: missing internal dependency, "eventdev" 00:12:49.450 baseband/*: missing internal dependency, "bbdev" 00:12:49.450 gpu/*: missing internal dependency, "gpudev" 00:12:49.450 00:12:49.450 00:12:50.015 Build targets in project: 85 00:12:50.015 00:12:50.015 DPDK 24.03.0 00:12:50.015 00:12:50.015 User defined options 00:12:50.015 buildtype : debug 00:12:50.015 default_library : shared 00:12:50.015 libdir : lib 00:12:50.015 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:12:50.015 b_sanitize : address 00:12:50.015 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:12:50.015 c_link_args : 00:12:50.015 cpu_instruction_set: native 00:12:50.015 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:12:50.015 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:12:50.015 enable_docs : false 00:12:50.015 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:12:50.015 enable_kmods : false 00:12:50.015 max_lcores : 128 00:12:50.015 tests : false 
00:12:50.015 00:12:50.015 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:12:50.273 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:12:50.531 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:12:50.531 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:12:50.531 [3/268] Linking static target lib/librte_kvargs.a 00:12:50.531 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:12:50.531 [5/268] Linking static target lib/librte_log.a 00:12:50.531 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:12:51.097 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:12:51.097 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:12:51.355 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:12:51.355 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:12:51.355 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:12:51.355 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:12:51.355 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:12:51.355 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:12:51.355 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:12:51.355 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:12:51.612 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:12:51.612 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:12:51.612 [19/268] Linking static target lib/librte_telemetry.a 00:12:51.612 [20/268] Linking target lib/librte_log.so.24.1 00:12:51.869 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 
00:12:51.869 [22/268] Linking target lib/librte_kvargs.so.24.1 00:12:52.127 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:12:52.127 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:12:52.385 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:12:52.385 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:12:52.385 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:12:52.385 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:12:52.385 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:12:52.385 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:12:52.385 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:12:52.643 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:12:52.643 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:12:52.643 [34/268] Linking target lib/librte_telemetry.so.24.1 00:12:52.643 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:12:52.901 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:12:52.901 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:12:53.158 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:12:53.158 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:12:53.417 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:12:53.417 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:12:53.417 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:12:53.417 [43/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:12:53.417 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:12:53.417 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:12:53.705 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:12:53.705 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:12:53.705 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:12:53.705 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:12:53.705 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:12:53.964 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:12:53.964 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:12:54.224 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:12:54.486 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:12:54.486 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:12:54.486 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:12:54.486 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:12:54.486 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:12:54.486 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:12:54.744 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:12:54.744 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:12:55.001 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:12:55.001 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:12:55.001 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:12:55.258 [65/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:12:55.515 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:12:55.515 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:12:55.515 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:12:55.515 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:12:55.773 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:12:55.773 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:12:55.773 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:12:56.029 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:12:56.029 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:12:56.029 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:12:56.286 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:12:56.286 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:12:56.286 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:12:56.286 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:12:56.544 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:12:56.544 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:12:56.544 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:12:56.801 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:12:56.801 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:12:56.801 [85/268] Linking static target lib/librte_eal.a 00:12:56.801 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:12:57.059 [87/268] Linking static target lib/librte_ring.a 00:12:57.059 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:12:57.059 [89/268] 
Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:12:57.317 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:12:57.317 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:12:57.317 [92/268] Linking static target lib/librte_mempool.a 00:12:57.317 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:12:57.574 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:12:57.574 [95/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:12:57.574 [96/268] Linking static target lib/librte_rcu.a 00:12:57.574 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:12:57.831 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:12:57.831 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:12:57.831 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:12:58.086 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:12:58.086 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:12:58.344 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:12:58.344 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:12:58.344 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:12:58.344 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:12:58.344 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:12:58.344 [108/268] Linking static target lib/librte_mbuf.a 00:12:58.344 [109/268] Linking static target lib/librte_net.a 00:12:58.600 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:12:58.600 [111/268] Linking static target lib/librte_meter.a 00:12:58.859 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:12:58.859 [113/268] 
Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:12:58.859 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:12:59.117 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:12:59.117 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:12:59.117 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:12:59.117 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:12:59.376 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:12:59.634 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:12:59.892 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:13:00.149 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:13:00.149 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:13:00.406 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:13:00.406 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:13:00.406 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:13:00.406 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:13:00.406 [128/268] Linking static target lib/librte_pci.a 00:13:00.407 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:13:00.407 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:13:00.664 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:13:00.664 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:13:00.664 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:00.664 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:13:00.921 [135/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:13:00.921 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:13:00.921 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:13:00.921 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:13:00.921 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:13:00.921 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:13:00.921 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:13:00.921 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:13:00.921 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:13:01.179 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:13:01.179 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:13:01.179 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:13:01.179 [147/268] Linking static target lib/librte_cmdline.a 00:13:01.436 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:13:01.437 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:13:01.694 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:13:01.952 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:13:01.952 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:13:01.952 [153/268] Linking static target lib/librte_ethdev.a 00:13:01.952 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:13:01.952 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:13:01.952 [156/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:13:01.952 [157/268] Linking static target 
lib/librte_timer.a 00:13:02.517 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:13:02.517 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:13:02.517 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:13:02.774 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:13:02.774 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:13:02.774 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:13:02.774 [164/268] Linking static target lib/librte_compressdev.a 00:13:02.774 [165/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:13:02.774 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:13:03.032 [167/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:13:03.032 [168/268] Linking static target lib/librte_hash.a 00:13:03.032 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:13:03.032 [170/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:13:03.032 [171/268] Linking static target lib/librte_dmadev.a 00:13:03.289 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:13:03.289 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:13:03.547 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:13:03.547 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:03.547 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:13:03.804 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:03.804 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:13:03.804 [179/268] Linking 
static target lib/librte_cryptodev.a 00:13:03.804 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:13:03.804 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:13:04.082 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:13:04.082 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:13:04.082 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:13:04.082 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:13:04.340 [186/268] Linking static target lib/librte_power.a 00:13:04.340 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:13:04.340 [188/268] Linking static target lib/librte_reorder.a 00:13:04.598 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:13:04.598 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:13:04.598 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:13:04.598 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:13:04.598 [193/268] Linking static target lib/librte_security.a 00:13:04.856 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:13:05.114 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:13:05.114 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:13:05.114 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:13:05.372 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:13:05.631 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:13:05.631 [200/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:05.631 [201/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:13:05.631 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:13:05.631 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:13:05.888 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:13:05.888 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:13:06.146 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:13:06.146 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:13:06.146 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:13:06.146 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:13:06.405 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:13:06.405 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:13:06.405 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:13:06.405 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:06.405 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:06.405 [215/268] Linking static target drivers/librte_bus_vdev.a 00:13:06.663 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:13:06.663 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:13:06.663 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:06.663 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:06.663 [220/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:13:06.663 [221/268] Linking static target drivers/librte_bus_pci.a 00:13:06.663 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson 
to capture output) 00:13:06.921 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:13:06.921 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:06.921 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:06.921 [226/268] Linking static target drivers/librte_mempool_ring.a 00:13:07.179 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:07.437 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:13:07.437 [229/268] Linking target lib/librte_eal.so.24.1 00:13:07.695 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:13:07.695 [231/268] Linking target lib/librte_timer.so.24.1 00:13:07.695 [232/268] Linking target lib/librte_meter.so.24.1 00:13:07.695 [233/268] Linking target lib/librte_pci.so.24.1 00:13:07.695 [234/268] Linking target lib/librte_ring.so.24.1 00:13:07.695 [235/268] Linking target lib/librte_dmadev.so.24.1 00:13:07.695 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:13:07.954 [237/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:13:07.954 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:13:07.954 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:13:07.954 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:13:07.954 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:13:07.954 [242/268] Linking target lib/librte_rcu.so.24.1 00:13:07.954 [243/268] Linking target lib/librte_mempool.so.24.1 00:13:07.954 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:13:08.213 [245/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 
00:13:08.213 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:13:08.213 [247/268] Linking target lib/librte_mbuf.so.24.1 00:13:08.213 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:13:08.213 [249/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:13:08.213 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:13:08.471 [251/268] Linking target lib/librte_compressdev.so.24.1 00:13:08.471 [252/268] Linking target lib/librte_reorder.so.24.1 00:13:08.471 [253/268] Linking target lib/librte_net.so.24.1 00:13:08.471 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:13:08.471 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:13:08.471 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:13:08.471 [257/268] Linking target lib/librte_security.so.24.1 00:13:08.471 [258/268] Linking target lib/librte_hash.so.24.1 00:13:08.471 [259/268] Linking target lib/librte_cmdline.so.24.1 00:13:08.729 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:13:09.295 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:09.295 [262/268] Linking target lib/librte_ethdev.so.24.1 00:13:09.295 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:13:09.553 [264/268] Linking target lib/librte_power.so.24.1 00:13:12.083 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:13:12.083 [266/268] Linking static target lib/librte_vhost.a 00:13:14.017 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:13:14.017 [268/268] Linking target lib/librte_vhost.so.24.1 00:13:14.017 INFO: autodetecting backend as ninja 00:13:14.017 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:13:14.951 CC lib/log/log.o 00:13:14.951 CC lib/log/log_flags.o 00:13:14.951 CC lib/log/log_deprecated.o 00:13:14.951 CC lib/ut/ut.o 00:13:14.951 CC lib/ut_mock/mock.o 00:13:15.209 LIB libspdk_log.a 00:13:15.209 LIB libspdk_ut.a 00:13:15.209 LIB libspdk_ut_mock.a 00:13:15.209 SO libspdk_log.so.7.0 00:13:15.209 SO libspdk_ut.so.2.0 00:13:15.209 SO libspdk_ut_mock.so.6.0 00:13:15.209 SYMLINK libspdk_log.so 00:13:15.467 SYMLINK libspdk_ut.so 00:13:15.467 SYMLINK libspdk_ut_mock.so 00:13:15.467 CC lib/dma/dma.o 00:13:15.467 CC lib/ioat/ioat.o 00:13:15.467 CC lib/util/base64.o 00:13:15.467 CC lib/util/bit_array.o 00:13:15.467 CC lib/util/crc16.o 00:13:15.467 CC lib/util/cpuset.o 00:13:15.467 CC lib/util/crc32.o 00:13:15.467 CC lib/util/crc32c.o 00:13:15.467 CXX lib/trace_parser/trace.o 00:13:15.725 CC lib/vfio_user/host/vfio_user_pci.o 00:13:15.725 CC lib/util/crc32_ieee.o 00:13:15.725 CC lib/util/crc64.o 00:13:15.725 CC lib/vfio_user/host/vfio_user.o 00:13:15.725 CC lib/util/dif.o 00:13:15.725 LIB libspdk_dma.a 00:13:15.725 CC lib/util/fd.o 00:13:15.725 CC lib/util/fd_group.o 00:13:15.725 SO libspdk_dma.so.4.0 00:13:15.983 CC lib/util/file.o 00:13:15.983 SYMLINK libspdk_dma.so 00:13:15.983 CC lib/util/hexlify.o 00:13:15.983 CC lib/util/iov.o 00:13:15.983 LIB libspdk_ioat.a 00:13:15.983 CC lib/util/math.o 00:13:15.983 SO libspdk_ioat.so.7.0 00:13:15.983 CC lib/util/net.o 00:13:15.983 LIB libspdk_vfio_user.a 00:13:15.983 SYMLINK libspdk_ioat.so 00:13:15.983 CC lib/util/pipe.o 00:13:15.983 CC lib/util/strerror_tls.o 00:13:15.983 SO libspdk_vfio_user.so.5.0 00:13:15.983 CC lib/util/string.o 00:13:16.240 SYMLINK libspdk_vfio_user.so 00:13:16.240 CC lib/util/uuid.o 00:13:16.240 CC lib/util/xor.o 00:13:16.240 CC lib/util/zipf.o 00:13:16.498 LIB libspdk_util.a 00:13:16.756 SO libspdk_util.so.10.0 00:13:16.756 LIB libspdk_trace_parser.a 00:13:16.756 SO libspdk_trace_parser.so.5.0 00:13:16.756 SYMLINK libspdk_util.so 
00:13:16.756 SYMLINK libspdk_trace_parser.so 00:13:17.013 CC lib/json/json_parse.o 00:13:17.013 CC lib/vmd/vmd.o 00:13:17.013 CC lib/vmd/led.o 00:13:17.013 CC lib/json/json_util.o 00:13:17.013 CC lib/conf/conf.o 00:13:17.013 CC lib/rdma_provider/common.o 00:13:17.013 CC lib/json/json_write.o 00:13:17.013 CC lib/idxd/idxd.o 00:13:17.013 CC lib/rdma_utils/rdma_utils.o 00:13:17.013 CC lib/env_dpdk/env.o 00:13:17.270 CC lib/rdma_provider/rdma_provider_verbs.o 00:13:17.270 CC lib/idxd/idxd_user.o 00:13:17.270 LIB libspdk_conf.a 00:13:17.270 CC lib/idxd/idxd_kernel.o 00:13:17.270 SO libspdk_conf.so.6.0 00:13:17.270 CC lib/env_dpdk/memory.o 00:13:17.270 LIB libspdk_rdma_utils.a 00:13:17.270 LIB libspdk_json.a 00:13:17.270 SYMLINK libspdk_conf.so 00:13:17.270 CC lib/env_dpdk/pci.o 00:13:17.270 SO libspdk_rdma_utils.so.1.0 00:13:17.270 LIB libspdk_rdma_provider.a 00:13:17.270 SO libspdk_json.so.6.0 00:13:17.529 SO libspdk_rdma_provider.so.6.0 00:13:17.529 SYMLINK libspdk_rdma_utils.so 00:13:17.529 CC lib/env_dpdk/init.o 00:13:17.529 CC lib/env_dpdk/threads.o 00:13:17.529 SYMLINK libspdk_json.so 00:13:17.529 CC lib/env_dpdk/pci_ioat.o 00:13:17.529 SYMLINK libspdk_rdma_provider.so 00:13:17.529 CC lib/env_dpdk/pci_virtio.o 00:13:17.529 CC lib/env_dpdk/pci_vmd.o 00:13:17.529 CC lib/env_dpdk/pci_idxd.o 00:13:17.529 CC lib/jsonrpc/jsonrpc_server.o 00:13:17.787 CC lib/env_dpdk/pci_event.o 00:13:17.787 CC lib/env_dpdk/sigbus_handler.o 00:13:17.787 CC lib/env_dpdk/pci_dpdk.o 00:13:17.787 LIB libspdk_idxd.a 00:13:17.787 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:13:17.787 SO libspdk_idxd.so.12.0 00:13:17.787 LIB libspdk_vmd.a 00:13:17.787 CC lib/env_dpdk/pci_dpdk_2207.o 00:13:17.787 CC lib/jsonrpc/jsonrpc_client.o 00:13:17.787 SO libspdk_vmd.so.6.0 00:13:17.787 CC lib/env_dpdk/pci_dpdk_2211.o 00:13:17.787 SYMLINK libspdk_idxd.so 00:13:17.787 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:13:18.045 SYMLINK libspdk_vmd.so 00:13:18.045 LIB libspdk_jsonrpc.a 00:13:18.302 SO libspdk_jsonrpc.so.6.0 
00:13:18.302 SYMLINK libspdk_jsonrpc.so 00:13:18.560 CC lib/rpc/rpc.o 00:13:18.818 LIB libspdk_rpc.a 00:13:18.818 LIB libspdk_env_dpdk.a 00:13:18.818 SO libspdk_rpc.so.6.0 00:13:18.818 SO libspdk_env_dpdk.so.15.0 00:13:18.818 SYMLINK libspdk_rpc.so 00:13:19.075 SYMLINK libspdk_env_dpdk.so 00:13:19.075 CC lib/trace/trace.o 00:13:19.075 CC lib/trace/trace_flags.o 00:13:19.075 CC lib/trace/trace_rpc.o 00:13:19.075 CC lib/keyring/keyring.o 00:13:19.075 CC lib/keyring/keyring_rpc.o 00:13:19.075 CC lib/notify/notify.o 00:13:19.075 CC lib/notify/notify_rpc.o 00:13:19.333 LIB libspdk_notify.a 00:13:19.333 SO libspdk_notify.so.6.0 00:13:19.333 SYMLINK libspdk_notify.so 00:13:19.333 LIB libspdk_trace.a 00:13:19.590 LIB libspdk_keyring.a 00:13:19.590 SO libspdk_trace.so.10.0 00:13:19.590 SO libspdk_keyring.so.1.0 00:13:19.590 SYMLINK libspdk_trace.so 00:13:19.590 SYMLINK libspdk_keyring.so 00:13:19.848 CC lib/thread/thread.o 00:13:19.848 CC lib/thread/iobuf.o 00:13:19.848 CC lib/sock/sock.o 00:13:19.848 CC lib/sock/sock_rpc.o 00:13:20.413 LIB libspdk_sock.a 00:13:20.413 SO libspdk_sock.so.10.0 00:13:20.670 SYMLINK libspdk_sock.so 00:13:20.928 CC lib/nvme/nvme_ctrlr_cmd.o 00:13:20.928 CC lib/nvme/nvme_ctrlr.o 00:13:20.928 CC lib/nvme/nvme_ns_cmd.o 00:13:20.928 CC lib/nvme/nvme_fabric.o 00:13:20.928 CC lib/nvme/nvme_pcie_common.o 00:13:20.928 CC lib/nvme/nvme_ns.o 00:13:20.928 CC lib/nvme/nvme_pcie.o 00:13:20.928 CC lib/nvme/nvme.o 00:13:20.928 CC lib/nvme/nvme_qpair.o 00:13:21.863 CC lib/nvme/nvme_quirks.o 00:13:21.863 CC lib/nvme/nvme_transport.o 00:13:21.863 CC lib/nvme/nvme_discovery.o 00:13:21.863 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:13:21.863 LIB libspdk_thread.a 00:13:21.863 SO libspdk_thread.so.10.1 00:13:21.863 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:13:22.122 SYMLINK libspdk_thread.so 00:13:22.122 CC lib/nvme/nvme_tcp.o 00:13:22.122 CC lib/nvme/nvme_opal.o 00:13:22.122 CC lib/accel/accel.o 00:13:22.122 CC lib/nvme/nvme_io_msg.o 00:13:22.381 CC lib/accel/accel_rpc.o 
00:13:22.381 CC lib/accel/accel_sw.o 00:13:22.381 CC lib/nvme/nvme_poll_group.o 00:13:22.639 CC lib/nvme/nvme_zns.o 00:13:22.639 CC lib/nvme/nvme_stubs.o 00:13:22.639 CC lib/nvme/nvme_auth.o 00:13:22.639 CC lib/nvme/nvme_cuse.o 00:13:22.639 CC lib/nvme/nvme_rdma.o 00:13:23.203 CC lib/blob/blobstore.o 00:13:23.203 CC lib/blob/request.o 00:13:23.203 CC lib/init/json_config.o 00:13:23.460 CC lib/virtio/virtio.o 00:13:23.460 LIB libspdk_accel.a 00:13:23.460 SO libspdk_accel.so.16.0 00:13:23.460 CC lib/init/subsystem.o 00:13:23.460 CC lib/init/subsystem_rpc.o 00:13:23.717 SYMLINK libspdk_accel.so 00:13:23.717 CC lib/init/rpc.o 00:13:23.717 CC lib/blob/zeroes.o 00:13:23.717 CC lib/virtio/virtio_vhost_user.o 00:13:23.717 CC lib/virtio/virtio_vfio_user.o 00:13:23.717 CC lib/blob/blob_bs_dev.o 00:13:23.717 LIB libspdk_init.a 00:13:23.976 CC lib/virtio/virtio_pci.o 00:13:23.976 SO libspdk_init.so.5.0 00:13:23.976 CC lib/bdev/bdev.o 00:13:23.976 CC lib/bdev/bdev_rpc.o 00:13:23.976 CC lib/bdev/bdev_zone.o 00:13:23.976 SYMLINK libspdk_init.so 00:13:23.976 CC lib/bdev/part.o 00:13:23.976 CC lib/bdev/scsi_nvme.o 00:13:24.234 LIB libspdk_virtio.a 00:13:24.234 CC lib/event/reactor.o 00:13:24.234 CC lib/event/app.o 00:13:24.234 CC lib/event/log_rpc.o 00:13:24.234 SO libspdk_virtio.so.7.0 00:13:24.234 CC lib/event/app_rpc.o 00:13:24.234 CC lib/event/scheduler_static.o 00:13:24.234 SYMLINK libspdk_virtio.so 00:13:24.492 LIB libspdk_nvme.a 00:13:24.812 SO libspdk_nvme.so.13.1 00:13:24.812 LIB libspdk_event.a 00:13:24.812 SO libspdk_event.so.14.0 00:13:25.070 SYMLINK libspdk_event.so 00:13:25.070 SYMLINK libspdk_nvme.so 00:13:27.599 LIB libspdk_blob.a 00:13:27.599 LIB libspdk_bdev.a 00:13:27.599 SO libspdk_blob.so.11.0 00:13:27.599 SO libspdk_bdev.so.16.0 00:13:27.599 SYMLINK libspdk_blob.so 00:13:27.599 SYMLINK libspdk_bdev.so 00:13:27.599 CC lib/blobfs/blobfs.o 00:13:27.599 CC lib/blobfs/tree.o 00:13:27.599 CC lib/lvol/lvol.o 00:13:27.856 CC lib/scsi/dev.o 00:13:27.856 CC 
lib/scsi/lun.o 00:13:27.856 CC lib/ftl/ftl_core.o 00:13:27.856 CC lib/nvmf/ctrlr.o 00:13:27.856 CC lib/nvmf/ctrlr_discovery.o 00:13:27.856 CC lib/ublk/ublk.o 00:13:27.856 CC lib/nbd/nbd.o 00:13:27.856 CC lib/nvmf/ctrlr_bdev.o 00:13:28.115 CC lib/scsi/port.o 00:13:28.115 CC lib/scsi/scsi.o 00:13:28.115 CC lib/nvmf/subsystem.o 00:13:28.373 CC lib/ftl/ftl_init.o 00:13:28.373 CC lib/scsi/scsi_bdev.o 00:13:28.373 CC lib/nbd/nbd_rpc.o 00:13:28.373 CC lib/nvmf/nvmf.o 00:13:28.632 CC lib/ftl/ftl_layout.o 00:13:28.632 LIB libspdk_nbd.a 00:13:28.632 SO libspdk_nbd.so.7.0 00:13:28.632 CC lib/ublk/ublk_rpc.o 00:13:28.632 SYMLINK libspdk_nbd.so 00:13:28.632 CC lib/ftl/ftl_debug.o 00:13:28.890 CC lib/nvmf/nvmf_rpc.o 00:13:28.890 LIB libspdk_blobfs.a 00:13:28.890 SO libspdk_blobfs.so.10.0 00:13:28.890 LIB libspdk_ublk.a 00:13:28.890 CC lib/ftl/ftl_io.o 00:13:28.890 SO libspdk_ublk.so.3.0 00:13:28.890 LIB libspdk_lvol.a 00:13:28.890 CC lib/scsi/scsi_pr.o 00:13:28.890 CC lib/scsi/scsi_rpc.o 00:13:28.890 SO libspdk_lvol.so.10.0 00:13:28.890 SYMLINK libspdk_blobfs.so 00:13:28.890 CC lib/ftl/ftl_sb.o 00:13:28.890 SYMLINK libspdk_ublk.so 00:13:28.890 CC lib/ftl/ftl_l2p.o 00:13:29.149 SYMLINK libspdk_lvol.so 00:13:29.149 CC lib/ftl/ftl_l2p_flat.o 00:13:29.149 CC lib/scsi/task.o 00:13:29.149 CC lib/ftl/ftl_nv_cache.o 00:13:29.149 CC lib/ftl/ftl_band.o 00:13:29.149 CC lib/nvmf/transport.o 00:13:29.149 CC lib/ftl/ftl_band_ops.o 00:13:29.408 CC lib/nvmf/tcp.o 00:13:29.408 LIB libspdk_scsi.a 00:13:29.408 SO libspdk_scsi.so.9.0 00:13:29.408 CC lib/ftl/ftl_writer.o 00:13:29.666 SYMLINK libspdk_scsi.so 00:13:29.666 CC lib/ftl/ftl_rq.o 00:13:29.666 CC lib/ftl/ftl_reloc.o 00:13:29.666 CC lib/ftl/ftl_l2p_cache.o 00:13:29.666 CC lib/nvmf/stubs.o 00:13:29.666 CC lib/nvmf/mdns_server.o 00:13:29.924 CC lib/ftl/ftl_p2l.o 00:13:29.924 CC lib/ftl/mngt/ftl_mngt.o 00:13:29.924 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:13:30.182 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:13:30.182 CC lib/iscsi/conn.o 00:13:30.182 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:13:30.182 CC lib/ftl/mngt/ftl_mngt_md.o 00:13:30.182 CC lib/ftl/mngt/ftl_mngt_misc.o 00:13:30.182 CC lib/iscsi/init_grp.o 00:13:30.182 CC lib/nvmf/rdma.o 00:13:30.440 CC lib/nvmf/auth.o 00:13:30.440 CC lib/iscsi/iscsi.o 00:13:30.440 CC lib/vhost/vhost.o 00:13:30.440 CC lib/vhost/vhost_rpc.o 00:13:30.440 CC lib/vhost/vhost_scsi.o 00:13:30.698 CC lib/iscsi/md5.o 00:13:30.698 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:13:30.698 CC lib/iscsi/param.o 00:13:30.956 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:13:30.956 CC lib/iscsi/portal_grp.o 00:13:30.956 CC lib/ftl/mngt/ftl_mngt_band.o 00:13:31.215 CC lib/iscsi/tgt_node.o 00:13:31.215 CC lib/vhost/vhost_blk.o 00:13:31.215 CC lib/vhost/rte_vhost_user.o 00:13:31.215 CC lib/iscsi/iscsi_subsystem.o 00:13:31.473 CC lib/iscsi/iscsi_rpc.o 00:13:31.473 CC lib/iscsi/task.o 00:13:31.473 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:13:31.731 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:13:31.731 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:13:31.731 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:13:31.731 CC lib/ftl/utils/ftl_conf.o 00:13:31.731 CC lib/ftl/utils/ftl_md.o 00:13:31.731 CC lib/ftl/utils/ftl_mempool.o 00:13:31.988 CC lib/ftl/utils/ftl_bitmap.o 00:13:31.988 CC lib/ftl/utils/ftl_property.o 00:13:31.988 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:13:31.988 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:13:31.988 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:13:32.246 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:13:32.246 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:13:32.246 LIB libspdk_iscsi.a 00:13:32.246 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:13:32.246 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:13:32.246 CC lib/ftl/upgrade/ftl_sb_v3.o 00:13:32.246 SO libspdk_iscsi.so.8.0 00:13:32.246 CC lib/ftl/upgrade/ftl_sb_v5.o 00:13:32.502 CC lib/ftl/nvc/ftl_nvc_dev.o 00:13:32.502 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:13:32.502 CC lib/ftl/base/ftl_base_dev.o 00:13:32.502 CC lib/ftl/base/ftl_base_bdev.o 00:13:32.502 LIB libspdk_vhost.a 00:13:32.502 CC 
lib/ftl/ftl_trace.o 00:13:32.502 SYMLINK libspdk_iscsi.so 00:13:32.502 SO libspdk_vhost.so.8.0 00:13:32.759 SYMLINK libspdk_vhost.so 00:13:32.759 LIB libspdk_ftl.a 00:13:33.016 LIB libspdk_nvmf.a 00:13:33.016 SO libspdk_ftl.so.9.0 00:13:33.273 SO libspdk_nvmf.so.19.0 00:13:33.532 SYMLINK libspdk_ftl.so 00:13:33.532 SYMLINK libspdk_nvmf.so 00:13:33.790 CC module/env_dpdk/env_dpdk_rpc.o 00:13:34.048 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:13:34.048 CC module/accel/dsa/accel_dsa.o 00:13:34.048 CC module/accel/ioat/accel_ioat.o 00:13:34.048 CC module/scheduler/gscheduler/gscheduler.o 00:13:34.048 CC module/sock/posix/posix.o 00:13:34.048 CC module/accel/error/accel_error.o 00:13:34.048 CC module/scheduler/dynamic/scheduler_dynamic.o 00:13:34.048 CC module/blob/bdev/blob_bdev.o 00:13:34.048 CC module/keyring/file/keyring.o 00:13:34.048 LIB libspdk_env_dpdk_rpc.a 00:13:34.048 SO libspdk_env_dpdk_rpc.so.6.0 00:13:34.048 LIB libspdk_scheduler_dpdk_governor.a 00:13:34.048 LIB libspdk_scheduler_gscheduler.a 00:13:34.048 SYMLINK libspdk_env_dpdk_rpc.so 00:13:34.048 CC module/keyring/file/keyring_rpc.o 00:13:34.048 SO libspdk_scheduler_dpdk_governor.so.4.0 00:13:34.048 SO libspdk_scheduler_gscheduler.so.4.0 00:13:34.048 CC module/accel/error/accel_error_rpc.o 00:13:34.305 SYMLINK libspdk_scheduler_dpdk_governor.so 00:13:34.305 LIB libspdk_scheduler_dynamic.a 00:13:34.305 SYMLINK libspdk_scheduler_gscheduler.so 00:13:34.305 CC module/accel/ioat/accel_ioat_rpc.o 00:13:34.305 CC module/accel/dsa/accel_dsa_rpc.o 00:13:34.305 SO libspdk_scheduler_dynamic.so.4.0 00:13:34.305 LIB libspdk_keyring_file.a 00:13:34.305 LIB libspdk_blob_bdev.a 00:13:34.305 SYMLINK libspdk_scheduler_dynamic.so 00:13:34.305 SO libspdk_keyring_file.so.1.0 00:13:34.305 SO libspdk_blob_bdev.so.11.0 00:13:34.305 CC module/keyring/linux/keyring.o 00:13:34.305 CC module/keyring/linux/keyring_rpc.o 00:13:34.305 LIB libspdk_accel_error.a 00:13:34.305 LIB libspdk_accel_ioat.a 00:13:34.305 CC 
module/accel/iaa/accel_iaa.o 00:13:34.305 SYMLINK libspdk_keyring_file.so 00:13:34.305 CC module/accel/iaa/accel_iaa_rpc.o 00:13:34.305 LIB libspdk_accel_dsa.a 00:13:34.305 SYMLINK libspdk_blob_bdev.so 00:13:34.305 SO libspdk_accel_ioat.so.6.0 00:13:34.305 SO libspdk_accel_error.so.2.0 00:13:34.305 SO libspdk_accel_dsa.so.5.0 00:13:34.562 SYMLINK libspdk_accel_ioat.so 00:13:34.562 SYMLINK libspdk_accel_error.so 00:13:34.562 SYMLINK libspdk_accel_dsa.so 00:13:34.562 LIB libspdk_keyring_linux.a 00:13:34.562 SO libspdk_keyring_linux.so.1.0 00:13:34.562 LIB libspdk_accel_iaa.a 00:13:34.562 SYMLINK libspdk_keyring_linux.so 00:13:34.562 SO libspdk_accel_iaa.so.3.0 00:13:34.562 CC module/bdev/delay/vbdev_delay.o 00:13:34.828 CC module/bdev/error/vbdev_error.o 00:13:34.828 CC module/blobfs/bdev/blobfs_bdev.o 00:13:34.828 CC module/bdev/malloc/bdev_malloc.o 00:13:34.828 CC module/bdev/lvol/vbdev_lvol.o 00:13:34.828 CC module/bdev/null/bdev_null.o 00:13:34.828 CC module/bdev/gpt/gpt.o 00:13:34.828 SYMLINK libspdk_accel_iaa.so 00:13:34.828 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:13:34.828 CC module/bdev/nvme/bdev_nvme.o 00:13:34.828 LIB libspdk_sock_posix.a 00:13:34.828 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:13:34.828 SO libspdk_sock_posix.so.6.0 00:13:34.828 CC module/bdev/gpt/vbdev_gpt.o 00:13:35.098 SYMLINK libspdk_sock_posix.so 00:13:35.098 CC module/bdev/delay/vbdev_delay_rpc.o 00:13:35.098 CC module/bdev/null/bdev_null_rpc.o 00:13:35.098 CC module/bdev/error/vbdev_error_rpc.o 00:13:35.098 LIB libspdk_blobfs_bdev.a 00:13:35.098 CC module/bdev/malloc/bdev_malloc_rpc.o 00:13:35.098 SO libspdk_blobfs_bdev.so.6.0 00:13:35.098 LIB libspdk_bdev_null.a 00:13:35.098 LIB libspdk_bdev_delay.a 00:13:35.098 LIB libspdk_bdev_error.a 00:13:35.355 SO libspdk_bdev_null.so.6.0 00:13:35.355 SYMLINK libspdk_blobfs_bdev.so 00:13:35.355 LIB libspdk_bdev_gpt.a 00:13:35.355 SO libspdk_bdev_error.so.6.0 00:13:35.355 SO libspdk_bdev_delay.so.6.0 00:13:35.355 SO libspdk_bdev_gpt.so.6.0 
00:13:35.355 LIB libspdk_bdev_lvol.a 00:13:35.355 SYMLINK libspdk_bdev_null.so 00:13:35.355 SYMLINK libspdk_bdev_error.so 00:13:35.355 SYMLINK libspdk_bdev_delay.so 00:13:35.355 LIB libspdk_bdev_malloc.a 00:13:35.355 CC module/bdev/nvme/bdev_nvme_rpc.o 00:13:35.355 SYMLINK libspdk_bdev_gpt.so 00:13:35.355 SO libspdk_bdev_lvol.so.6.0 00:13:35.355 CC module/bdev/raid/bdev_raid.o 00:13:35.355 CC module/bdev/passthru/vbdev_passthru.o 00:13:35.355 SO libspdk_bdev_malloc.so.6.0 00:13:35.355 CC module/bdev/split/vbdev_split.o 00:13:35.355 SYMLINK libspdk_bdev_lvol.so 00:13:35.355 CC module/bdev/split/vbdev_split_rpc.o 00:13:35.613 SYMLINK libspdk_bdev_malloc.so 00:13:35.613 CC module/bdev/zone_block/vbdev_zone_block.o 00:13:35.613 CC module/bdev/aio/bdev_aio.o 00:13:35.613 CC module/bdev/ftl/bdev_ftl.o 00:13:35.613 CC module/bdev/ftl/bdev_ftl_rpc.o 00:13:35.613 LIB libspdk_bdev_split.a 00:13:35.613 CC module/bdev/iscsi/bdev_iscsi.o 00:13:35.613 SO libspdk_bdev_split.so.6.0 00:13:35.613 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:13:35.871 SYMLINK libspdk_bdev_split.so 00:13:35.871 CC module/bdev/aio/bdev_aio_rpc.o 00:13:35.871 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:13:35.871 LIB libspdk_bdev_ftl.a 00:13:35.871 LIB libspdk_bdev_passthru.a 00:13:35.871 CC module/bdev/raid/bdev_raid_rpc.o 00:13:35.871 SO libspdk_bdev_ftl.so.6.0 00:13:35.871 SO libspdk_bdev_passthru.so.6.0 00:13:36.128 CC module/bdev/virtio/bdev_virtio_scsi.o 00:13:36.128 SYMLINK libspdk_bdev_ftl.so 00:13:36.128 SYMLINK libspdk_bdev_passthru.so 00:13:36.128 CC module/bdev/virtio/bdev_virtio_blk.o 00:13:36.128 CC module/bdev/virtio/bdev_virtio_rpc.o 00:13:36.128 LIB libspdk_bdev_aio.a 00:13:36.128 LIB libspdk_bdev_zone_block.a 00:13:36.128 SO libspdk_bdev_aio.so.6.0 00:13:36.128 SO libspdk_bdev_zone_block.so.6.0 00:13:36.128 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:13:36.128 CC module/bdev/raid/bdev_raid_sb.o 00:13:36.128 SYMLINK libspdk_bdev_zone_block.so 00:13:36.128 SYMLINK 
libspdk_bdev_aio.so 00:13:36.128 CC module/bdev/raid/raid0.o 00:13:36.128 CC module/bdev/raid/raid1.o 00:13:36.385 CC module/bdev/nvme/nvme_rpc.o 00:13:36.385 LIB libspdk_bdev_iscsi.a 00:13:36.385 SO libspdk_bdev_iscsi.so.6.0 00:13:36.385 CC module/bdev/rbd/bdev_rbd.o 00:13:36.385 CC module/bdev/rbd/bdev_rbd_rpc.o 00:13:36.385 SYMLINK libspdk_bdev_iscsi.so 00:13:36.385 CC module/bdev/nvme/bdev_mdns_client.o 00:13:36.385 CC module/bdev/raid/concat.o 00:13:36.385 CC module/bdev/nvme/vbdev_opal.o 00:13:36.385 CC module/bdev/nvme/vbdev_opal_rpc.o 00:13:36.640 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:13:36.640 LIB libspdk_bdev_virtio.a 00:13:36.641 SO libspdk_bdev_virtio.so.6.0 00:13:36.641 LIB libspdk_bdev_raid.a 00:13:36.940 SYMLINK libspdk_bdev_virtio.so 00:13:36.940 SO libspdk_bdev_raid.so.6.0 00:13:36.940 LIB libspdk_bdev_rbd.a 00:13:36.940 SO libspdk_bdev_rbd.so.7.0 00:13:36.940 SYMLINK libspdk_bdev_raid.so 00:13:36.940 SYMLINK libspdk_bdev_rbd.so 00:13:37.877 LIB libspdk_bdev_nvme.a 00:13:37.877 SO libspdk_bdev_nvme.so.7.0 00:13:38.135 SYMLINK libspdk_bdev_nvme.so 00:13:38.701 CC module/event/subsystems/sock/sock.o 00:13:38.701 CC module/event/subsystems/scheduler/scheduler.o 00:13:38.701 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:13:38.701 CC module/event/subsystems/keyring/keyring.o 00:13:38.701 CC module/event/subsystems/iobuf/iobuf.o 00:13:38.701 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:13:38.701 CC module/event/subsystems/vmd/vmd.o 00:13:38.701 CC module/event/subsystems/vmd/vmd_rpc.o 00:13:38.701 LIB libspdk_event_keyring.a 00:13:38.701 LIB libspdk_event_scheduler.a 00:13:38.701 SO libspdk_event_keyring.so.1.0 00:13:38.701 LIB libspdk_event_vhost_blk.a 00:13:38.701 LIB libspdk_event_sock.a 00:13:38.701 SO libspdk_event_scheduler.so.4.0 00:13:38.701 LIB libspdk_event_iobuf.a 00:13:38.701 LIB libspdk_event_vmd.a 00:13:38.701 SO libspdk_event_vhost_blk.so.3.0 00:13:38.701 SO libspdk_event_sock.so.5.0 00:13:38.959 SYMLINK 
libspdk_event_scheduler.so 00:13:38.959 SO libspdk_event_iobuf.so.3.0 00:13:38.959 SYMLINK libspdk_event_keyring.so 00:13:38.959 SO libspdk_event_vmd.so.6.0 00:13:38.959 SYMLINK libspdk_event_vhost_blk.so 00:13:38.959 SYMLINK libspdk_event_sock.so 00:13:38.959 SYMLINK libspdk_event_iobuf.so 00:13:38.959 SYMLINK libspdk_event_vmd.so 00:13:39.217 CC module/event/subsystems/accel/accel.o 00:13:39.474 LIB libspdk_event_accel.a 00:13:39.474 SO libspdk_event_accel.so.6.0 00:13:39.474 SYMLINK libspdk_event_accel.so 00:13:39.732 CC module/event/subsystems/bdev/bdev.o 00:13:39.990 LIB libspdk_event_bdev.a 00:13:39.990 SO libspdk_event_bdev.so.6.0 00:13:39.990 SYMLINK libspdk_event_bdev.so 00:13:40.248 CC module/event/subsystems/ublk/ublk.o 00:13:40.248 CC module/event/subsystems/scsi/scsi.o 00:13:40.248 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:13:40.248 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:13:40.248 CC module/event/subsystems/nbd/nbd.o 00:13:40.505 LIB libspdk_event_nbd.a 00:13:40.505 LIB libspdk_event_scsi.a 00:13:40.505 LIB libspdk_event_ublk.a 00:13:40.505 SO libspdk_event_nbd.so.6.0 00:13:40.505 SO libspdk_event_scsi.so.6.0 00:13:40.505 SO libspdk_event_ublk.so.3.0 00:13:40.505 SYMLINK libspdk_event_nbd.so 00:13:40.506 LIB libspdk_event_nvmf.a 00:13:40.506 SYMLINK libspdk_event_scsi.so 00:13:40.506 SYMLINK libspdk_event_ublk.so 00:13:40.763 SO libspdk_event_nvmf.so.6.0 00:13:40.763 SYMLINK libspdk_event_nvmf.so 00:13:40.763 CC module/event/subsystems/iscsi/iscsi.o 00:13:40.763 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:13:41.022 LIB libspdk_event_vhost_scsi.a 00:13:41.022 LIB libspdk_event_iscsi.a 00:13:41.022 SO libspdk_event_vhost_scsi.so.3.0 00:13:41.022 SO libspdk_event_iscsi.so.6.0 00:13:41.280 SYMLINK libspdk_event_vhost_scsi.so 00:13:41.280 SYMLINK libspdk_event_iscsi.so 00:13:41.280 SO libspdk.so.6.0 00:13:41.280 SYMLINK libspdk.so 00:13:41.538 CC app/trace_record/trace_record.o 00:13:41.538 CXX app/trace/trace.o 00:13:41.538 TEST_HEADER 
include/spdk/accel.h 00:13:41.538 TEST_HEADER include/spdk/accel_module.h 00:13:41.538 TEST_HEADER include/spdk/assert.h 00:13:41.538 TEST_HEADER include/spdk/barrier.h 00:13:41.538 TEST_HEADER include/spdk/base64.h 00:13:41.538 TEST_HEADER include/spdk/bdev.h 00:13:41.538 TEST_HEADER include/spdk/bdev_module.h 00:13:41.538 TEST_HEADER include/spdk/bdev_zone.h 00:13:41.538 TEST_HEADER include/spdk/bit_array.h 00:13:41.538 TEST_HEADER include/spdk/bit_pool.h 00:13:41.538 TEST_HEADER include/spdk/blob_bdev.h 00:13:41.538 TEST_HEADER include/spdk/blobfs_bdev.h 00:13:41.538 TEST_HEADER include/spdk/blobfs.h 00:13:41.538 TEST_HEADER include/spdk/blob.h 00:13:41.538 TEST_HEADER include/spdk/conf.h 00:13:41.538 TEST_HEADER include/spdk/config.h 00:13:41.538 CC examples/interrupt_tgt/interrupt_tgt.o 00:13:41.538 TEST_HEADER include/spdk/cpuset.h 00:13:41.538 TEST_HEADER include/spdk/crc16.h 00:13:41.538 TEST_HEADER include/spdk/crc32.h 00:13:41.538 TEST_HEADER include/spdk/crc64.h 00:13:41.797 TEST_HEADER include/spdk/dif.h 00:13:41.797 TEST_HEADER include/spdk/dma.h 00:13:41.797 CC app/nvmf_tgt/nvmf_main.o 00:13:41.797 TEST_HEADER include/spdk/endian.h 00:13:41.797 TEST_HEADER include/spdk/env_dpdk.h 00:13:41.797 TEST_HEADER include/spdk/env.h 00:13:41.797 TEST_HEADER include/spdk/event.h 00:13:41.797 TEST_HEADER include/spdk/fd_group.h 00:13:41.797 TEST_HEADER include/spdk/fd.h 00:13:41.797 TEST_HEADER include/spdk/file.h 00:13:41.797 TEST_HEADER include/spdk/ftl.h 00:13:41.797 TEST_HEADER include/spdk/gpt_spec.h 00:13:41.797 TEST_HEADER include/spdk/hexlify.h 00:13:41.797 TEST_HEADER include/spdk/histogram_data.h 00:13:41.797 TEST_HEADER include/spdk/idxd.h 00:13:41.797 TEST_HEADER include/spdk/idxd_spec.h 00:13:41.797 CC examples/ioat/perf/perf.o 00:13:41.797 TEST_HEADER include/spdk/init.h 00:13:41.797 CC examples/util/zipf/zipf.o 00:13:41.797 TEST_HEADER include/spdk/ioat.h 00:13:41.797 TEST_HEADER include/spdk/ioat_spec.h 00:13:41.797 TEST_HEADER 
include/spdk/iscsi_spec.h 00:13:41.797 TEST_HEADER include/spdk/json.h 00:13:41.797 CC test/thread/poller_perf/poller_perf.o 00:13:41.797 TEST_HEADER include/spdk/jsonrpc.h 00:13:41.797 TEST_HEADER include/spdk/keyring.h 00:13:41.797 TEST_HEADER include/spdk/keyring_module.h 00:13:41.797 TEST_HEADER include/spdk/likely.h 00:13:41.797 TEST_HEADER include/spdk/log.h 00:13:41.797 TEST_HEADER include/spdk/lvol.h 00:13:41.797 CC test/dma/test_dma/test_dma.o 00:13:41.797 TEST_HEADER include/spdk/memory.h 00:13:41.797 TEST_HEADER include/spdk/mmio.h 00:13:41.797 TEST_HEADER include/spdk/nbd.h 00:13:41.797 TEST_HEADER include/spdk/net.h 00:13:41.797 CC test/app/bdev_svc/bdev_svc.o 00:13:41.797 TEST_HEADER include/spdk/notify.h 00:13:41.798 TEST_HEADER include/spdk/nvme.h 00:13:41.798 TEST_HEADER include/spdk/nvme_intel.h 00:13:41.798 TEST_HEADER include/spdk/nvme_ocssd.h 00:13:41.798 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:13:41.798 TEST_HEADER include/spdk/nvme_spec.h 00:13:41.798 TEST_HEADER include/spdk/nvme_zns.h 00:13:41.798 TEST_HEADER include/spdk/nvmf_cmd.h 00:13:41.798 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:13:41.798 TEST_HEADER include/spdk/nvmf.h 00:13:41.798 TEST_HEADER include/spdk/nvmf_spec.h 00:13:41.798 TEST_HEADER include/spdk/nvmf_transport.h 00:13:41.798 TEST_HEADER include/spdk/opal.h 00:13:41.798 TEST_HEADER include/spdk/opal_spec.h 00:13:41.798 TEST_HEADER include/spdk/pci_ids.h 00:13:41.798 TEST_HEADER include/spdk/pipe.h 00:13:41.798 TEST_HEADER include/spdk/queue.h 00:13:41.798 TEST_HEADER include/spdk/reduce.h 00:13:41.798 TEST_HEADER include/spdk/rpc.h 00:13:41.798 TEST_HEADER include/spdk/scheduler.h 00:13:41.798 TEST_HEADER include/spdk/scsi.h 00:13:41.798 TEST_HEADER include/spdk/scsi_spec.h 00:13:41.798 TEST_HEADER include/spdk/sock.h 00:13:41.798 TEST_HEADER include/spdk/stdinc.h 00:13:41.798 TEST_HEADER include/spdk/string.h 00:13:41.798 TEST_HEADER include/spdk/thread.h 00:13:41.798 TEST_HEADER include/spdk/trace.h 00:13:41.798 
TEST_HEADER include/spdk/trace_parser.h 00:13:41.798 TEST_HEADER include/spdk/tree.h 00:13:41.798 TEST_HEADER include/spdk/ublk.h 00:13:41.798 TEST_HEADER include/spdk/util.h 00:13:41.798 TEST_HEADER include/spdk/uuid.h 00:13:41.798 TEST_HEADER include/spdk/version.h 00:13:41.798 TEST_HEADER include/spdk/vfio_user_pci.h 00:13:41.798 TEST_HEADER include/spdk/vfio_user_spec.h 00:13:41.798 TEST_HEADER include/spdk/vhost.h 00:13:41.798 TEST_HEADER include/spdk/vmd.h 00:13:41.798 TEST_HEADER include/spdk/xor.h 00:13:41.798 TEST_HEADER include/spdk/zipf.h 00:13:41.798 LINK interrupt_tgt 00:13:41.798 CXX test/cpp_headers/accel.o 00:13:41.798 LINK spdk_trace_record 00:13:42.056 LINK zipf 00:13:42.056 LINK nvmf_tgt 00:13:42.056 LINK poller_perf 00:13:42.056 LINK bdev_svc 00:13:42.056 LINK ioat_perf 00:13:42.056 LINK spdk_trace 00:13:42.056 CXX test/cpp_headers/accel_module.o 00:13:42.314 CC examples/ioat/verify/verify.o 00:13:42.314 LINK test_dma 00:13:42.314 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:13:42.314 CXX test/cpp_headers/assert.o 00:13:42.314 CC test/app/histogram_perf/histogram_perf.o 00:13:42.314 CC examples/thread/thread/thread_ex.o 00:13:42.314 CC examples/sock/hello_world/hello_sock.o 00:13:42.314 CC examples/vmd/lsvmd/lsvmd.o 00:13:42.314 CC examples/vmd/led/led.o 00:13:42.314 CC app/iscsi_tgt/iscsi_tgt.o 00:13:42.572 LINK verify 00:13:42.572 LINK histogram_perf 00:13:42.572 CXX test/cpp_headers/barrier.o 00:13:42.572 LINK lsvmd 00:13:42.572 LINK led 00:13:42.572 CXX test/cpp_headers/base64.o 00:13:42.572 LINK iscsi_tgt 00:13:42.572 LINK thread 00:13:42.572 CXX test/cpp_headers/bdev.o 00:13:42.572 LINK hello_sock 00:13:42.830 CC test/env/mem_callbacks/mem_callbacks.o 00:13:42.830 CC app/spdk_lspci/spdk_lspci.o 00:13:42.830 LINK nvme_fuzz 00:13:42.830 CC app/spdk_tgt/spdk_tgt.o 00:13:42.830 CXX test/cpp_headers/bdev_module.o 00:13:42.830 CXX test/cpp_headers/bdev_zone.o 00:13:42.830 CC app/spdk_nvme_perf/perf.o 00:13:42.830 LINK spdk_lspci 00:13:43.087 CC 
test/event/event_perf/event_perf.o 00:13:43.087 CC test/event/reactor/reactor.o 00:13:43.087 LINK spdk_tgt 00:13:43.087 CXX test/cpp_headers/bit_array.o 00:13:43.087 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:13:43.087 CC examples/idxd/perf/perf.o 00:13:43.087 CC test/event/reactor_perf/reactor_perf.o 00:13:43.087 LINK event_perf 00:13:43.087 LINK reactor 00:13:43.345 CXX test/cpp_headers/bit_pool.o 00:13:43.345 CC test/event/app_repeat/app_repeat.o 00:13:43.345 LINK reactor_perf 00:13:43.345 CXX test/cpp_headers/blob_bdev.o 00:13:43.345 CXX test/cpp_headers/blobfs_bdev.o 00:13:43.345 CXX test/cpp_headers/blobfs.o 00:13:43.345 LINK mem_callbacks 00:13:43.345 CXX test/cpp_headers/blob.o 00:13:43.345 LINK app_repeat 00:13:43.604 LINK idxd_perf 00:13:43.604 CC test/env/vtophys/vtophys.o 00:13:43.604 CC examples/accel/perf/accel_perf.o 00:13:43.604 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:13:43.604 CXX test/cpp_headers/conf.o 00:13:43.604 CC test/rpc_client/rpc_client_test.o 00:13:43.604 CC test/nvme/aer/aer.o 00:13:43.863 CC test/event/scheduler/scheduler.o 00:13:43.863 LINK vtophys 00:13:43.863 CXX test/cpp_headers/config.o 00:13:43.863 LINK env_dpdk_post_init 00:13:43.863 LINK rpc_client_test 00:13:43.863 CXX test/cpp_headers/cpuset.o 00:13:43.863 LINK spdk_nvme_perf 00:13:44.121 CC test/accel/dif/dif.o 00:13:44.121 LINK scheduler 00:13:44.121 CXX test/cpp_headers/crc16.o 00:13:44.121 LINK aer 00:13:44.121 CC test/env/memory/memory_ut.o 00:13:44.121 CC test/blobfs/mkfs/mkfs.o 00:13:44.121 LINK accel_perf 00:13:44.121 CC app/spdk_nvme_identify/identify.o 00:13:44.379 CXX test/cpp_headers/crc32.o 00:13:44.379 CC test/lvol/esnap/esnap.o 00:13:44.379 CC test/env/pci/pci_ut.o 00:13:44.379 LINK mkfs 00:13:44.379 CC test/nvme/reset/reset.o 00:13:44.379 CXX test/cpp_headers/crc64.o 00:13:44.636 LINK dif 00:13:44.636 CC examples/blob/hello_world/hello_blob.o 00:13:44.636 CXX test/cpp_headers/dif.o 00:13:44.636 LINK reset 00:13:44.894 CC 
examples/blob/cli/blobcli.o 00:13:44.894 CXX test/cpp_headers/dma.o 00:13:44.894 LINK pci_ut 00:13:44.894 LINK hello_blob 00:13:44.894 CC test/app/jsoncat/jsoncat.o 00:13:44.894 CC test/nvme/sgl/sgl.o 00:13:45.152 CXX test/cpp_headers/endian.o 00:13:45.152 LINK jsoncat 00:13:45.152 LINK iscsi_fuzz 00:13:45.152 CXX test/cpp_headers/env_dpdk.o 00:13:45.152 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:13:45.410 LINK spdk_nvme_identify 00:13:45.410 CC test/app/stub/stub.o 00:13:45.410 LINK sgl 00:13:45.410 CC test/bdev/bdevio/bdevio.o 00:13:45.410 LINK memory_ut 00:13:45.410 CXX test/cpp_headers/env.o 00:13:45.410 LINK blobcli 00:13:45.410 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:13:45.668 LINK stub 00:13:45.668 CXX test/cpp_headers/event.o 00:13:45.668 CC app/spdk_nvme_discover/discovery_aer.o 00:13:45.668 CC test/nvme/e2edp/nvme_dp.o 00:13:45.668 CC examples/nvme/hello_world/hello_world.o 00:13:45.668 CC examples/nvme/reconnect/reconnect.o 00:13:45.668 CXX test/cpp_headers/fd_group.o 00:13:45.926 CC examples/nvme/nvme_manage/nvme_manage.o 00:13:45.926 CC examples/nvme/arbitration/arbitration.o 00:13:45.926 LINK spdk_nvme_discover 00:13:45.926 LINK bdevio 00:13:45.926 CXX test/cpp_headers/fd.o 00:13:45.926 LINK nvme_dp 00:13:45.926 LINK hello_world 00:13:45.926 LINK vhost_fuzz 00:13:46.184 CC app/spdk_top/spdk_top.o 00:13:46.184 CXX test/cpp_headers/file.o 00:13:46.184 LINK reconnect 00:13:46.184 CC test/nvme/overhead/overhead.o 00:13:46.184 CC test/nvme/err_injection/err_injection.o 00:13:46.184 LINK arbitration 00:13:46.184 CC examples/nvme/hotplug/hotplug.o 00:13:46.184 CC examples/nvme/cmb_copy/cmb_copy.o 00:13:46.184 CXX test/cpp_headers/ftl.o 00:13:46.441 CC examples/nvme/abort/abort.o 00:13:46.441 LINK nvme_manage 00:13:46.441 LINK err_injection 00:13:46.441 LINK cmb_copy 00:13:46.441 CXX test/cpp_headers/gpt_spec.o 00:13:46.441 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:13:46.441 LINK overhead 00:13:46.441 LINK hotplug 00:13:46.737 CXX 
test/cpp_headers/hexlify.o 00:13:46.737 CXX test/cpp_headers/histogram_data.o 00:13:46.737 LINK pmr_persistence 00:13:46.737 CC app/vhost/vhost.o 00:13:46.737 CC app/spdk_dd/spdk_dd.o 00:13:46.737 CC test/nvme/startup/startup.o 00:13:46.737 LINK abort 00:13:46.994 CXX test/cpp_headers/idxd.o 00:13:46.995 CC app/fio/nvme/fio_plugin.o 00:13:46.995 LINK vhost 00:13:46.995 CC app/fio/bdev/fio_plugin.o 00:13:46.995 LINK startup 00:13:46.995 CXX test/cpp_headers/idxd_spec.o 00:13:46.995 CC examples/bdev/hello_world/hello_bdev.o 00:13:47.252 LINK spdk_dd 00:13:47.252 CC examples/bdev/bdevperf/bdevperf.o 00:13:47.252 CXX test/cpp_headers/init.o 00:13:47.252 LINK spdk_top 00:13:47.252 CC test/nvme/reserve/reserve.o 00:13:47.252 CC test/nvme/simple_copy/simple_copy.o 00:13:47.510 LINK hello_bdev 00:13:47.510 CXX test/cpp_headers/ioat.o 00:13:47.510 CC test/nvme/connect_stress/connect_stress.o 00:13:47.510 CC test/nvme/boot_partition/boot_partition.o 00:13:47.510 LINK reserve 00:13:47.510 CXX test/cpp_headers/ioat_spec.o 00:13:47.510 LINK simple_copy 00:13:47.769 LINK spdk_bdev 00:13:47.769 LINK spdk_nvme 00:13:47.769 CC test/nvme/compliance/nvme_compliance.o 00:13:47.769 LINK boot_partition 00:13:47.769 CXX test/cpp_headers/iscsi_spec.o 00:13:47.769 LINK connect_stress 00:13:47.769 CC test/nvme/doorbell_aers/doorbell_aers.o 00:13:47.769 CC test/nvme/fused_ordering/fused_ordering.o 00:13:47.769 CC test/nvme/fdp/fdp.o 00:13:48.027 CC test/nvme/cuse/cuse.o 00:13:48.027 CXX test/cpp_headers/json.o 00:13:48.027 CXX test/cpp_headers/jsonrpc.o 00:13:48.027 CXX test/cpp_headers/keyring.o 00:13:48.027 LINK doorbell_aers 00:13:48.027 LINK fused_ordering 00:13:48.027 CXX test/cpp_headers/keyring_module.o 00:13:48.027 CXX test/cpp_headers/likely.o 00:13:48.027 LINK nvme_compliance 00:13:48.285 CXX test/cpp_headers/log.o 00:13:48.285 LINK bdevperf 00:13:48.285 CXX test/cpp_headers/lvol.o 00:13:48.285 CXX test/cpp_headers/memory.o 00:13:48.285 LINK fdp 00:13:48.285 CXX 
test/cpp_headers/mmio.o 00:13:48.285 CXX test/cpp_headers/nbd.o 00:13:48.285 CXX test/cpp_headers/net.o 00:13:48.285 CXX test/cpp_headers/notify.o 00:13:48.285 CXX test/cpp_headers/nvme.o 00:13:48.543 CXX test/cpp_headers/nvme_intel.o 00:13:48.543 CXX test/cpp_headers/nvme_ocssd.o 00:13:48.543 CXX test/cpp_headers/nvme_ocssd_spec.o 00:13:48.543 CXX test/cpp_headers/nvme_spec.o 00:13:48.543 CXX test/cpp_headers/nvme_zns.o 00:13:48.543 CXX test/cpp_headers/nvmf_cmd.o 00:13:48.543 CXX test/cpp_headers/nvmf_fc_spec.o 00:13:48.543 CXX test/cpp_headers/nvmf.o 00:13:48.543 CXX test/cpp_headers/nvmf_spec.o 00:13:48.543 CC examples/nvmf/nvmf/nvmf.o 00:13:48.801 CXX test/cpp_headers/nvmf_transport.o 00:13:48.801 CXX test/cpp_headers/opal.o 00:13:48.801 CXX test/cpp_headers/opal_spec.o 00:13:48.801 CXX test/cpp_headers/pci_ids.o 00:13:48.801 CXX test/cpp_headers/pipe.o 00:13:48.801 CXX test/cpp_headers/queue.o 00:13:48.801 CXX test/cpp_headers/reduce.o 00:13:48.801 CXX test/cpp_headers/rpc.o 00:13:48.801 CXX test/cpp_headers/scheduler.o 00:13:48.801 CXX test/cpp_headers/scsi.o 00:13:49.059 CXX test/cpp_headers/scsi_spec.o 00:13:49.059 CXX test/cpp_headers/sock.o 00:13:49.059 CXX test/cpp_headers/stdinc.o 00:13:49.059 LINK nvmf 00:13:49.059 CXX test/cpp_headers/string.o 00:13:49.059 CXX test/cpp_headers/thread.o 00:13:49.059 CXX test/cpp_headers/trace.o 00:13:49.059 CXX test/cpp_headers/trace_parser.o 00:13:49.059 CXX test/cpp_headers/tree.o 00:13:49.059 CXX test/cpp_headers/ublk.o 00:13:49.059 CXX test/cpp_headers/util.o 00:13:49.059 CXX test/cpp_headers/uuid.o 00:13:49.316 CXX test/cpp_headers/version.o 00:13:49.316 CXX test/cpp_headers/vfio_user_pci.o 00:13:49.316 CXX test/cpp_headers/vfio_user_spec.o 00:13:49.316 CXX test/cpp_headers/vhost.o 00:13:49.316 CXX test/cpp_headers/vmd.o 00:13:49.316 CXX test/cpp_headers/xor.o 00:13:49.316 CXX test/cpp_headers/zipf.o 00:13:49.574 LINK cuse 00:13:51.473 LINK esnap 00:13:51.731 00:13:51.731 real 1m13.591s 00:13:51.731 user 
7m10.698s 00:13:51.731 sys 1m37.537s 00:13:51.731 16:51:53 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:13:51.731 ************************************ 00:13:51.731 END TEST make 00:13:51.731 ************************************ 00:13:51.731 16:51:53 make -- common/autotest_common.sh@10 -- $ set +x 00:13:51.731 16:51:53 -- common/autotest_common.sh@1142 -- $ return 0 00:13:51.731 16:51:53 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:13:51.731 16:51:53 -- pm/common@29 -- $ signal_monitor_resources TERM 00:13:51.731 16:51:53 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:13:51.731 16:51:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:51.731 16:51:53 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:13:51.731 16:51:53 -- pm/common@44 -- $ pid=5307 00:13:51.731 16:51:53 -- pm/common@50 -- $ kill -TERM 5307 00:13:51.731 16:51:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:51.731 16:51:53 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:13:51.731 16:51:53 -- pm/common@44 -- $ pid=5309 00:13:51.731 16:51:53 -- pm/common@50 -- $ kill -TERM 5309 00:13:51.990 16:51:53 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:51.990 16:51:53 -- nvmf/common.sh@7 -- # uname -s 00:13:51.990 16:51:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.990 16:51:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.990 16:51:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.990 16:51:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.990 16:51:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.990 16:51:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.990 16:51:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.990 16:51:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.990 16:51:53 -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.990 16:51:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.990 16:51:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ed8bf231-bc82-4919-8d10-e9b4f641cbc5 00:13:51.990 16:51:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=ed8bf231-bc82-4919-8d10-e9b4f641cbc5 00:13:51.990 16:51:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.990 16:51:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.990 16:51:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:51.990 16:51:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.990 16:51:53 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:51.990 16:51:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.990 16:51:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.990 16:51:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.990 16:51:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.990 16:51:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.990 16:51:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:51.990 16:51:53 -- paths/export.sh@5 -- # export PATH 00:13:51.990 16:51:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.990 16:51:53 -- nvmf/common.sh@47 -- # : 0 00:13:51.990 16:51:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:51.990 16:51:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:51.990 16:51:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.990 16:51:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.990 16:51:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.990 16:51:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:51.990 16:51:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:51.990 16:51:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:51.990 16:51:53 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:13:51.990 16:51:53 -- spdk/autotest.sh@32 -- # uname -s 00:13:51.990 16:51:53 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:13:51.990 16:51:53 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:13:51.990 16:51:53 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:13:51.990 16:51:53 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:13:51.990 16:51:53 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:13:51.990 16:51:53 -- spdk/autotest.sh@44 -- # modprobe nbd 00:13:51.990 16:51:53 -- spdk/autotest.sh@46 -- # type -P udevadm 00:13:51.990 16:51:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:13:51.990 16:51:53 -- spdk/autotest.sh@48 -- # udevadm_pid=53015 00:13:51.990 16:51:53 -- spdk/autotest.sh@53 -- # 
start_monitor_resources 00:13:51.990 16:51:53 -- pm/common@17 -- # local monitor 00:13:51.990 16:51:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:13:51.990 16:51:53 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:13:51.990 16:51:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:13:51.990 16:51:53 -- pm/common@25 -- # sleep 1 00:13:51.990 16:51:53 -- pm/common@21 -- # date +%s 00:13:51.990 16:51:53 -- pm/common@21 -- # date +%s 00:13:51.990 16:51:53 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721667113 00:13:51.990 16:51:53 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721667113 00:13:51.990 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721667113_collect-vmstat.pm.log 00:13:51.991 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721667113_collect-cpu-load.pm.log 00:13:52.925 16:51:54 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:13:52.925 16:51:54 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:13:52.925 16:51:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:52.925 16:51:54 -- common/autotest_common.sh@10 -- # set +x 00:13:52.925 16:51:54 -- spdk/autotest.sh@59 -- # create_test_list 00:13:52.925 16:51:54 -- common/autotest_common.sh@746 -- # xtrace_disable 00:13:52.925 16:51:54 -- common/autotest_common.sh@10 -- # set +x 00:13:52.925 16:51:54 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:13:52.925 16:51:54 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:13:52.925 16:51:54 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:13:52.925 16:51:54 -- spdk/autotest.sh@62 -- # 
out=/home/vagrant/spdk_repo/spdk/../output 00:13:52.925 16:51:54 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:13:52.925 16:51:54 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:13:52.925 16:51:54 -- common/autotest_common.sh@1455 -- # uname 00:13:52.925 16:51:54 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:13:52.925 16:51:54 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:13:52.925 16:51:54 -- common/autotest_common.sh@1475 -- # uname 00:13:53.183 16:51:54 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:13:53.183 16:51:54 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:13:53.183 16:51:54 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:13:53.183 16:51:54 -- spdk/autotest.sh@72 -- # hash lcov 00:13:53.183 16:51:54 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:13:53.183 16:51:54 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:13:53.183 --rc lcov_branch_coverage=1 00:13:53.183 --rc lcov_function_coverage=1 00:13:53.183 --rc genhtml_branch_coverage=1 00:13:53.183 --rc genhtml_function_coverage=1 00:13:53.183 --rc genhtml_legend=1 00:13:53.183 --rc geninfo_all_blocks=1 00:13:53.183 ' 00:13:53.183 16:51:54 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:13:53.183 --rc lcov_branch_coverage=1 00:13:53.183 --rc lcov_function_coverage=1 00:13:53.183 --rc genhtml_branch_coverage=1 00:13:53.183 --rc genhtml_function_coverage=1 00:13:53.183 --rc genhtml_legend=1 00:13:53.183 --rc geninfo_all_blocks=1 00:13:53.183 ' 00:13:53.183 16:51:54 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:13:53.183 --rc lcov_branch_coverage=1 00:13:53.183 --rc lcov_function_coverage=1 00:13:53.183 --rc genhtml_branch_coverage=1 00:13:53.183 --rc genhtml_function_coverage=1 00:13:53.183 --rc genhtml_legend=1 00:13:53.183 --rc geninfo_all_blocks=1 00:13:53.183 --no-external' 00:13:53.183 16:51:54 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:13:53.183 --rc lcov_branch_coverage=1 00:13:53.183 --rc 
lcov_function_coverage=1 00:13:53.183 --rc genhtml_branch_coverage=1 00:13:53.183 --rc genhtml_function_coverage=1 00:13:53.183 --rc genhtml_legend=1 00:13:53.183 --rc geninfo_all_blocks=1 00:13:53.183 --no-external' 00:13:53.183 16:51:54 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:13:53.183 lcov: LCOV version 1.14 00:13:53.183 16:51:54 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:14:11.275 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:14:11.275 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions 
found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:14:21.304 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:14:21.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:14:21.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 
00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:14:21.305 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:14:21.305 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions 
found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:14:21.305 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:14:21.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:14:21.306 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:14:21.306 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:14:21.306 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:14:21.306 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:14:21.306 
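The geninfo warnings above are benign: each `test/cpp_headers/*.gcno` comes from compiling a translation unit that only includes a public header, so it defines no functions and GCOV has no data to report. A minimal sketch (the helper name is illustrative, not part of the SPDK scripts) for tallying this noise in a coverage log:

```shell
# Count geninfo "no functions found" warnings from a coverage log on stdin.
# Assumed helper for triage only; each such line corresponds to one
# header-compile .gcno with no function records, which is expected.
count_empty_gcno_warnings() {
    grep -c 'no functions found' || true
}
```

Usage would be `count_empty_gcno_warnings < geninfo.log`; a count that matches the number of `cpp_headers` objects means nothing else was dropped.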
/home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:14:21.306 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:14:24.586 16:52:26 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:14:24.586 16:52:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:24.586 16:52:26 -- common/autotest_common.sh@10 -- # set +x 00:14:24.586 16:52:26 -- spdk/autotest.sh@91 -- # rm -f 00:14:24.586 16:52:26 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:25.151 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:25.410 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:14:25.410 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:14:25.410 16:52:26 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:14:25.410 16:52:26 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:14:25.410 16:52:26 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:14:25.410 16:52:26 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:14:25.410 16:52:26 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:25.410 16:52:26 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:14:25.410 16:52:26 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:14:25.410 16:52:26 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:25.410 16:52:26 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:25.410 16:52:26 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:25.410 16:52:26 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:14:25.410 16:52:26 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:14:25.410 16:52:26 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:25.410 16:52:26 -- common/autotest_common.sh@1665 -- # [[ none != 
none ]] 00:14:25.410 16:52:26 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:25.410 16:52:26 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:14:25.410 16:52:26 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:14:25.410 16:52:26 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:14:25.410 16:52:26 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:25.410 16:52:26 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:25.410 16:52:26 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:14:25.410 16:52:26 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:14:25.410 16:52:26 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:14:25.410 16:52:26 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:25.410 16:52:26 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:14:25.410 16:52:26 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:25.410 16:52:26 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:25.410 16:52:26 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:14:25.410 16:52:26 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:14:25.410 16:52:26 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:14:25.410 No valid GPT data, bailing 00:14:25.410 16:52:26 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:14:25.410 16:52:26 -- scripts/common.sh@391 -- # pt= 00:14:25.410 16:52:26 -- scripts/common.sh@392 -- # return 1 00:14:25.410 16:52:26 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:14:25.410 1+0 records in 00:14:25.410 1+0 records out 00:14:25.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00517537 s, 203 MB/s 00:14:25.410 16:52:26 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:25.410 16:52:26 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:25.410 16:52:26 
-- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:14:25.410 16:52:26 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:14:25.410 16:52:26 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:14:25.410 No valid GPT data, bailing 00:14:25.410 16:52:26 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:14:25.410 16:52:26 -- scripts/common.sh@391 -- # pt= 00:14:25.410 16:52:26 -- scripts/common.sh@392 -- # return 1 00:14:25.410 16:52:26 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:14:25.410 1+0 records in 00:14:25.410 1+0 records out 00:14:25.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00470619 s, 223 MB/s 00:14:25.410 16:52:26 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:25.410 16:52:26 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:25.410 16:52:26 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:14:25.410 16:52:26 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:14:25.410 16:52:26 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:14:25.668 No valid GPT data, bailing 00:14:25.668 16:52:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:14:25.668 16:52:27 -- scripts/common.sh@391 -- # pt= 00:14:25.668 16:52:27 -- scripts/common.sh@392 -- # return 1 00:14:25.668 16:52:27 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:14:25.668 1+0 records in 00:14:25.668 1+0 records out 00:14:25.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00488186 s, 215 MB/s 00:14:25.668 16:52:27 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:25.668 16:52:27 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:25.668 16:52:27 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:14:25.668 16:52:27 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:14:25.668 16:52:27 -- scripts/common.sh@387 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:14:25.668 No valid GPT data, bailing 00:14:25.668 16:52:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:14:25.668 16:52:27 -- scripts/common.sh@391 -- # pt= 00:14:25.668 16:52:27 -- scripts/common.sh@392 -- # return 1 00:14:25.668 16:52:27 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:14:25.668 1+0 records in 00:14:25.668 1+0 records out 00:14:25.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00528495 s, 198 MB/s 00:14:25.668 16:52:27 -- spdk/autotest.sh@118 -- # sync 00:14:25.668 16:52:27 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:14:25.668 16:52:27 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:14:25.668 16:52:27 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:14:27.601 16:52:29 -- spdk/autotest.sh@124 -- # uname -s 00:14:27.601 16:52:29 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:14:27.601 16:52:29 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:14:27.601 16:52:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:27.601 16:52:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:27.601 16:52:29 -- common/autotest_common.sh@10 -- # set +x 00:14:27.601 ************************************ 00:14:27.601 START TEST setup.sh 00:14:27.601 ************************************ 00:14:27.601 16:52:29 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:14:27.601 * Looking for test storage... 
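The pre-cleanup loop above makes one decision per namespace: if `blkid -s PTTYPE -o value` prints nothing (no partition table, matching the "No valid GPT data, bailing" path), the device is treated as unused and its first MiB is zeroed with `dd`. A sketch of that decision logic, with illustrative helper names rather than the real `scripts/common.sh` functions:

```shell
# True when blkid reported no partition-table type for the device,
# i.e. the value passed in (captured from
# `blkid -s PTTYPE -o value /dev/...`) is empty.
pt_is_empty() {
    [[ -z $1 ]]
}

# Print (without running) the wipe command the log shows for an
# unused device: zero the first 1 MiB.
wipe_cmd() {
    printf 'dd if=/dev/zero of=%s bs=1M count=1\n' "$1"
}
```

In the real flow the wipe only runs after `pt_is_empty` holds; a populated PTTYPE (e.g. `gpt`) leaves the device alone.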
00:14:27.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:27.601 16:52:29 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:14:27.601 16:52:29 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:14:27.601 16:52:29 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:14:27.601 16:52:29 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:27.601 16:52:29 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:27.601 16:52:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:14:27.601 ************************************ 00:14:27.601 START TEST acl 00:14:27.601 ************************************ 00:14:27.601 16:52:29 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:14:27.860 * Looking for test storage... 00:14:27.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:27.860 16:52:29 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1673 -- # 
is_block_zoned nvme1n1 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:14:27.860 16:52:29 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:27.860 16:52:29 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:14:27.860 16:52:29 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:14:27.860 16:52:29 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:14:27.860 16:52:29 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:14:27.860 16:52:29 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:14:27.860 16:52:29 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:27.860 16:52:29 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:28.427 16:52:29 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:14:28.427 16:52:29 setup.sh.acl 
-- setup/acl.sh@16 -- # local dev driver 00:14:28.427 16:52:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:28.427 16:52:29 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:14:28.427 16:52:29 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:14:28.427 16:52:29 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:29.362 Hugepages 00:14:29.362 node hugesize free / total 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:29.362 00:14:29.362 Type BDF Vendor Device NUMA Driver Device Block devices 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@22 -- # 
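The `read -r _ dev _ _ _ driver _` trace above shows how `acl.sh` consumes `setup.sh status` output: column 2 is the BDF, column 6 the bound driver, and non-BDF rows (the header and hugepage lines) are skipped by the `*:*:*.*` glob test. A self-contained sketch of that parsing, fed sample rows shaped like the log's "Type BDF Vendor Device NUMA Driver Device Block devices" table (sample data is illustrative):

```shell
# Read setup.sh-status-style rows on stdin and print the BDFs bound
# to the nvme driver. Mirrors acl.sh's field positions: skip any row
# whose second column is not a PCI BDF, then filter on column 6.
list_nvme_bdfs() {
    local _ dev driver
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue   # header / hugepage rows
        [[ $driver == nvme ]] && printf '%s\n' "$dev"
    done
}
```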
drivers["$dev"]=nvme 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:14:29.362 16:52:30 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:14:29.362 16:52:30 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:29.362 16:52:30 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:29.362 16:52:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:14:29.362 ************************************ 00:14:29.362 START TEST denied 00:14:29.362 ************************************ 00:14:29.362 16:52:30 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:14:29.362 16:52:30 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:14:29.362 16:52:30 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:14:29.362 16:52:30 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:14:29.362 16:52:30 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:14:29.362 16:52:30 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:30.296 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:14:30.296 16:52:31 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:14:30.296 16:52:31 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:14:30.296 16:52:31 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:14:30.296 16:52:31 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:14:30.296 16:52:31 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:14:30.296 16:52:31 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:14:30.296 16:52:31 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:14:30.296 16:52:31 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:14:30.296 16:52:31 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:30.296 16:52:31 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:30.867 00:14:30.867 real 0m1.451s 00:14:30.867 user 0m0.560s 00:14:30.867 sys 0m0.836s 00:14:30.867 16:52:32 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:30.867 16:52:32 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:14:30.867 ************************************ 00:14:30.867 END TEST denied 00:14:30.867 ************************************ 00:14:30.867 16:52:32 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:14:30.867 16:52:32 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:14:30.867 16:52:32 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:30.867 16:52:32 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:30.867 16:52:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:14:30.867 ************************************ 00:14:30.867 START TEST allowed 00:14:30.867 ************************************ 00:14:30.867 16:52:32 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:14:30.867 16:52:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:14:30.867 16:52:32 setup.sh.acl.allowed -- setup/acl.sh@45 
-- # setup output config 00:14:30.867 16:52:32 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:14:30.867 16:52:32 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:14:30.867 16:52:32 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:31.803 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:31.803 16:52:33 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:14:31.803 16:52:33 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:14:31.803 16:52:33 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:14:31.803 16:52:33 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:14:31.803 16:52:33 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:14:31.803 16:52:33 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:14:31.803 16:52:33 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:14:31.803 16:52:33 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:14:31.803 16:52:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:31.803 16:52:33 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:32.370 00:14:32.370 real 0m1.545s 00:14:32.370 user 0m0.671s 00:14:32.370 sys 0m0.858s 00:14:32.370 16:52:33 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:32.370 ************************************ 00:14:32.370 END TEST allowed 00:14:32.370 ************************************ 00:14:32.370 16:52:33 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:14:32.370 16:52:33 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:14:32.370 ************************************ 00:14:32.370 END TEST acl 00:14:32.370 ************************************ 
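The `verify` steps traced above resolve a device's bound driver by following the `driver` symlink under its sysfs node with `readlink -f` and comparing the basename against `nvme`. A sketch of that check (helper name illustrative), exercised against a throwaway directory tree instead of the real `/sys/bus/pci/devices`:

```shell
# Resolve the driver bound to a PCI function: the basename of the
# `driver` symlink under its sysfs device directory, exactly as the
# readlink -f trace in the log does.
driver_of() {
    basename "$(readlink -f "$1/driver")"
}
```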
00:14:32.370 00:14:32.370 real 0m4.815s 00:14:32.370 user 0m2.074s 00:14:32.370 sys 0m2.681s 00:14:32.370 16:52:33 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:32.370 16:52:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:14:32.630 16:52:34 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:14:32.630 16:52:34 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:14:32.630 16:52:34 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:32.630 16:52:34 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:32.630 16:52:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:14:32.630 ************************************ 00:14:32.630 START TEST hugepages 00:14:32.630 ************************************ 00:14:32.630 16:52:34 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:14:32.630 * Looking for test storage... 
00:14:32.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5830652 kB' 'MemAvailable: 7387384 kB' 'Buffers: 2436 kB' 'Cached: 1770772 kB' 'SwapCached: 0 kB' 'Active: 435788 kB' 'Inactive: 1442652 kB' 'Active(anon): 115720 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 106868 kB' 'Mapped: 48736 kB' 'Shmem: 10488 kB' 'KReclaimable: 61904 kB' 'Slab: 134156 kB' 'SReclaimable: 61904 kB' 'SUnreclaim: 72252 kB' 'KernelStack: 6492 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 336872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.630 16:52:34 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.630 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 
-- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.631 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.632 16:52:34 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@18 -- # 
global_huge_nr=/proc/sys/vm/nr_hugepages 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:14:32.632 16:52:34 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:14:32.632 16:52:34 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:32.632 16:52:34 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:32.632 16:52:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:14:32.632 ************************************ 00:14:32.632 START TEST default_setup 00:14:32.632 ************************************ 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@67 -- # local -g nodes_test 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:14:32.632 16:52:34 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:33.200 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:33.460 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:33.460 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:33.460 
16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7932184 kB' 'MemAvailable: 9488776 kB' 'Buffers: 2436 kB' 'Cached: 1770760 kB' 'SwapCached: 0 kB' 'Active: 452408 kB' 'Inactive: 1442660 kB' 'Active(anon): 132340 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442660 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123444 kB' 'Mapped: 48880 kB' 'Shmem: 10464 kB' 'KReclaimable: 61604 kB' 'Slab: 133844 kB' 'SReclaimable: 61604 kB' 'SUnreclaim: 72240 kB' 'KernelStack: 6464 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:14:33.460 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[setup/common.sh@31-@32 trace condensed: the same IFS=': ' / read -r var val _ / continue triple repeats for every remaining /proc/meminfo key that is not AnonHugePages, from SwapCached through HardwareCorrupted]
00:14:33.462 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:14:33.462 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:14:33.462 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:14:33.462 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:14:33.462 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:14:33.462 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:14:33.462 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:14:33.462 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:14:33.462 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:14:33.462 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:33.462 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:33.462 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:33.462 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:14:33.462 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:33.462 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:14:33.462 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7932184 kB' 'MemAvailable: 9488780 kB' 'Buffers: 2436 kB' 'Cached: 1770760 kB' 'SwapCached: 0 kB' 'Active: 452448 kB' 'Inactive: 1442664 kB' 'Active(anon): 132380 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123500 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 61604 kB' 'Slab: 133844 kB' 'SReclaimable: 61604 kB' 'SUnreclaim: 72240 kB' 'KernelStack: 6480 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB'
00:14:33.462 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:14:33.462 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:33.462 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[setup/common.sh@31-@32 trace condensed: the IFS=': ' / read -r var val _ / continue triple repeats for every /proc/meminfo key that is not HugePages_Surp, from MemFree through HugePages_Rsvd]
00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- #
local var val 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7932184 kB' 'MemAvailable: 9488780 kB' 'Buffers: 2436 kB' 'Cached: 1770760 kB' 'SwapCached: 0 kB' 'Active: 452404 kB' 'Inactive: 1442664 kB' 'Active(anon): 132336 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123452 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 61604 kB' 'Slab: 133844 kB' 'SReclaimable: 61604 kB' 'SUnreclaim: 72240 kB' 'KernelStack: 6464 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.726 16:52:35 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.726 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 
16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.727 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:14:33.728 nr_hugepages=1024 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:33.728 resv_hugepages=0 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:33.728 surplus_hugepages=0 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:33.728 anon_hugepages=0 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == 
nr_hugepages + surp + resv )) 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7932184 kB' 'MemAvailable: 9488780 kB' 'Buffers: 2436 kB' 'Cached: 1770760 kB' 'SwapCached: 0 kB' 'Active: 452152 kB' 'Inactive: 1442664 kB' 'Active(anon): 132084 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123204 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 61604 kB' 'Slab: 133840 kB' 'SReclaimable: 61604 kB' 'SUnreclaim: 72236 kB' 'KernelStack: 6464 kB' 'PageTables: 4188 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.728 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.729 16:52:35 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.729 16:52:35 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.729 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 
16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 
16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:33.730 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv 
)) 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7932184 kB' 'MemUsed: 4309796 kB' 'SwapCached: 0 kB' 'Active: 452396 kB' 'Inactive: 1442664 kB' 'Active(anon): 132328 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1773196 kB' 'Mapped: 48760 kB' 'AnonPages: 123452 kB' 'Shmem: 10464 kB' 'KernelStack: 6464 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61604 kB' 'Slab: 133836 kB' 'SReclaimable: 61604 kB' 'SUnreclaim: 72232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.731 16:52:35 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.731 
16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.731 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.732 
16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.732 16:52:35 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:14:33.732 node0=1024 expecting 1024 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # 
echo 'node0=1024 expecting 1024' 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:14:33.732 00:14:33.732 real 0m1.043s 00:14:33.732 user 0m0.476s 00:14:33.732 sys 0m0.466s 00:14:33.732 ************************************ 00:14:33.732 END TEST default_setup 00:14:33.732 ************************************ 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:33.732 16:52:35 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:14:33.732 16:52:35 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:14:33.732 16:52:35 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:14:33.732 16:52:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:33.732 16:52:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:33.732 16:52:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:14:33.732 ************************************ 00:14:33.732 START TEST per_node_1G_alloc 00:14:33.732 ************************************ 00:14:33.732 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:14:33.732 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:14:33.732 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:14:33.732 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:14:33.732 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:14:33.732 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:14:33.732 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:14:33.732 16:52:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:14:33.732 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:33.732 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:14:33.732 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:14:33.732 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:14:33.732 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:14:33.732 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:14:33.732 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:33.733 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:33.733 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:33.733 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:14:33.733 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:14:33.733 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:14:33.733 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:14:33.733 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:14:33.733 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:14:33.733 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:14:33.733 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:14:33.733 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:33.991 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:34.256 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:34.256 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:34.256 16:52:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8986960 kB' 'MemAvailable: 10543552 kB' 'Buffers: 2436 kB' 'Cached: 1770760 kB' 'SwapCached: 0 kB' 'Active: 452392 kB' 'Inactive: 1442664 kB' 'Active(anon): 132324 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123404 kB' 'Mapped: 48896 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133824 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72224 kB' 'KernelStack: 6504 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.256 
16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.256 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 
16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.257 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.258 16:52:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:14:34.258 16:52:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8987016 kB' 'MemAvailable: 10543608 kB' 'Buffers: 2436 kB' 'Cached: 1770760 kB' 'SwapCached: 0 kB' 'Active: 452296 kB' 'Inactive: 1442664 kB' 'Active(anon): 132228 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123344 kB' 'Mapped: 48768 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133820 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72220 kB' 'KernelStack: 6464 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached 
00:14:34.258 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue for each /proc/meminfo field from SwapCached through HugePages_Rsvd (repetitive IFS=': ' / read -r var val _ xtrace condensed)
00:14:34.260 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:34.260 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:14:34.260 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:14:34.260 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:14:34.260 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:14:34.260 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:14:34.260 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:14:34.260 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:14:34.260 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:14:34.260 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:34.260 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:34.260 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:34.260 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:14:34.260 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:34.260 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8987016 kB' 'MemAvailable: 10543608 kB' 'Buffers: 2436 kB' 'Cached: 1770760 kB' 'SwapCached: 0 kB' 'Active: 451980 kB' 'Inactive: 1442664 kB' 'Active(anon): 131912 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123020 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133828 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72228 kB' 'KernelStack: 6464 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB'
00:14:34.260 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue for each field from MemTotal through FileHugePages (repetitive xtrace condensed; log truncated mid-scan)
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.262 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.262 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.262 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.262 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.262 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.262 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.262 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.262 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.262 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.262 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.262 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.262 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.262 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.262 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.262 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.262 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.262 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.262 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.262 
16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.262 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.262 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.262 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:14:34.263 nr_hugepages=512 00:14:34.263 resv_hugepages=0 00:14:34.263 surplus_hugepages=0 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:34.263 anon_hugepages=0 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:14:34.263 
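The trace above repeats one pattern per `/proc/meminfo` key: split each line on `': '`, compare the key against the requested field (here `HugePages_Rsvd`), `continue` on a mismatch, and `echo` the value on a match, falling back to `echo 0` when the key is absent. A minimal standalone sketch of that lookup follows; the function name and the optional file argument are illustrative, not the actual `setup/common.sh` implementation (which also strips the `Node N ` prefix that per-node meminfo files prepend, approximated here with a glob):

```shell
#!/usr/bin/env bash
# Hypothetical helper modeled on the get_meminfo trace above.
# Usage: get_meminfo_value <Key> [meminfo-file]
get_meminfo_value() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local line var val _
    while read -r line; do
        # Per-node files (/sys/devices/system/node/nodeN/meminfo) prefix
        # each line with "Node N "; strip it so keys compare cleanly.
        line=${line#Node [0-9]* }
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"    # value only; the unit (e.g. "kB") lands in _
            return 0
        fi
    done < "$mem_f"
    echo 0                 # key absent: report 0, as the trace does
}
```

For example, `get_meminfo_value HugePages_Rsvd` would print `0` for the snapshot captured in this run.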
16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8987016 kB' 'MemAvailable: 10543608 kB' 'Buffers: 2436 kB' 'Cached: 1770760 kB' 'SwapCached: 0 kB' 'Active: 451964 kB' 'Inactive: 1442664 kB' 'Active(anon): 131896 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123268 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 
61600 kB' 'Slab: 133828 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72228 kB' 'KernelStack: 6464 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.263 
16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.263 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 
16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.264 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.265 16:52:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.265 16:52:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == 
nr_hugepages + surp + resv )) 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:34.265 16:52:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8987016 kB' 'MemUsed: 3254964 kB' 'SwapCached: 0 kB' 'Active: 452488 kB' 'Inactive: 1442664 kB' 'Active(anon): 132420 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1773196 kB' 'Mapped: 48760 kB' 'AnonPages: 123568 kB' 'Shmem: 10464 kB' 'KernelStack: 6464 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61600 kB' 'Slab: 133828 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.265 16:52:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.265 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.266 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:34.267 node0=512 expecting 512 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:14:34.267 00:14:34.267 real 0m0.546s 00:14:34.267 user 0m0.278s 00:14:34.267 sys 0m0.276s 00:14:34.267 16:52:35 
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:34.267 16:52:35 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:14:34.267 ************************************ 00:14:34.267 END TEST per_node_1G_alloc 00:14:34.267 ************************************ 00:14:34.267 16:52:35 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:14:34.267 16:52:35 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:14:34.267 16:52:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:34.267 16:52:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.267 16:52:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:14:34.526 ************************************ 00:14:34.526 START TEST even_2G_alloc 00:14:34.526 ************************************ 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local 
_nr_hugepages=1024 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:14:34.526 16:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:34.790 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:34.790 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:34.790 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:14:34.790 16:52:36 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7939096 kB' 'MemAvailable: 9495688 kB' 'Buffers: 2436 kB' 'Cached: 1770760 kB' 'SwapCached: 0 kB' 'Active: 452580 kB' 'Inactive: 1442664 kB' 'Active(anon): 132512 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123844 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133836 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72236 kB' 'KernelStack: 6504 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.790 16:52:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.790 16:52:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.790 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 
16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:34.791 16:52:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:34.791 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7940216 kB' 'MemAvailable: 9496808 kB' 'Buffers: 2436 kB' 'Cached: 1770760 kB' 'SwapCached: 0 kB' 'Active: 451988 kB' 'Inactive: 1442664 kB' 'Active(anon): 131920 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 
'Writeback: 0 kB' 'AnonPages: 123288 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133836 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72236 kB' 'KernelStack: 6480 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.792 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 
16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.793 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7940216 kB' 'MemAvailable: 9496808 kB' 'Buffers: 2436 kB' 'Cached: 1770760 kB' 'SwapCached: 0 kB' 'Active: 451948 kB' 'Inactive: 1442664 kB' 'Active(anon): 131880 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 
'Inactive(file): 1442664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123252 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133836 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72236 kB' 'KernelStack: 6464 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.794 16:52:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.794 16:52:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.794 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.795 16:52:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.795 16:52:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.795 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.796 16:52:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:14:34.796 16:52:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:14:34.796 nr_hugepages=1024 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:14:34.796 resv_hugepages=0 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:34.796 surplus_hugepages=0 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:34.796 anon_hugepages=0 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:34.796 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7941352 kB' 'MemAvailable: 9497944 kB' 'Buffers: 2436 kB' 'Cached: 1770760 kB' 'SwapCached: 0 kB' 'Active: 451952 kB' 'Inactive: 1442664 kB' 'Active(anon): 131884 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123304 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133836 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72236 kB' 'KernelStack: 6464 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.797 16:52:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.797 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:34.799 16:52:36
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7942652 kB' 'MemUsed: 4299328 kB' 'SwapCached: 0 kB' 'Active: 452200 kB' 'Inactive: 1442664 kB' 'Active(anon): 132132 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1773196 kB' 'Mapped: 48760 kB' 'AnonPages: 123332 kB' 'Shmem: 10464 kB' 'KernelStack: 6512 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61600 kB' 'Slab: 133832 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.799 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:34.800 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:34.800 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var
val _ 00:14:34.800 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:34.800 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:14:34.800 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:14:34.801 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:34.801 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:34.801 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:34.801 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:34.801 node0=1024 expecting 1024 00:14:34.801 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:14:34.801 16:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:14:34.801 00:14:34.801 real 0m0.514s 00:14:34.801 user 0m0.257s 00:14:34.801 sys 0m0.291s 00:14:34.801 16:52:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:34.801 16:52:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:14:34.801 ************************************ 00:14:34.801 END TEST even_2G_alloc 00:14:34.801 ************************************ 00:14:35.060 16:52:36 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:14:35.060 16:52:36 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:14:35.060 16:52:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:35.060 16:52:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.060 16:52:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:14:35.060 
************************************ 00:14:35.060 START TEST odd_alloc 00:14:35.060 ************************************ 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@84 -- # : 0 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:14:35.060 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:35.323 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:35.323 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:35.323 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:35.323 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:14:35.323 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:14:35.323 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:14:35.323 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:14:35.323 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:14:35.323 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:14:35.323 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:14:35.323 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:35.323 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:35.323 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:35.323 16:52:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@18 -- # local node= 00:14:35.323 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:14:35.323 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:35.323 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:35.323 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:35.323 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:35.323 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:35.323 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:35.323 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.323 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7944328 kB' 'MemAvailable: 9500920 kB' 'Buffers: 2436 kB' 'Cached: 1770760 kB' 'SwapCached: 0 kB' 'Active: 452584 kB' 'Inactive: 1442664 kB' 'Active(anon): 132516 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123920 kB' 'Mapped: 48844 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133852 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72252 kB' 'KernelStack: 6452 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 
'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 
16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7944588 kB' 'MemAvailable: 9501180 kB' 'Buffers: 2436 kB' 'Cached: 1770760 kB' 'SwapCached: 0 kB' 'Active: 452168 kB' 'Inactive: 1442664 kB' 'Active(anon): 132100 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 
1442664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123316 kB' 'Mapped: 49020 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133868 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72268 kB' 'KernelStack: 6496 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.324 16:52:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.324 16:52:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.324 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # 
local get=HugePages_Rsvd 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7945016 kB' 'MemAvailable: 9501612 kB' 'Buffers: 2436 kB' 'Cached: 1770764 kB' 'SwapCached: 0 kB' 'Active: 451952 kB' 'Inactive: 1442668 kB' 'Active(anon): 131884 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123016 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133812 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72212 kB' 'KernelStack: 6448 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:14:35.325 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:14:35.326 nr_hugepages=1025 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:14:35.326 resv_hugepages=0 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:35.326 surplus_hugepages=0 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:35.326 anon_hugepages=0 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7945268 kB' 'MemAvailable: 9501864 kB' 'Buffers: 2436 kB' 'Cached: 1770764 kB' 'SwapCached: 0 kB' 'Active: 451928 kB' 'Inactive: 1442668 kB' 'Active(anon): 131860 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123016 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133812 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72212 kB' 'KernelStack: 6432 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 
00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 
16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 
16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 
16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 
16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.326 
16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.326 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:35.327 16:52:36 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:35.327 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7945268 kB' 'MemUsed: 4296712 kB' 'SwapCached: 0 kB' 'Active: 452292 kB' 'Inactive: 1442668 kB' 'Active(anon): 132224 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1773200 kB' 'Mapped: 48760 kB' 
'AnonPages: 123372 kB' 'Shmem: 10464 kB' 'KernelStack: 6480 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61600 kB' 'Slab: 133812 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.587 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.588 
16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.588 
16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:35.588 node0=1025 expecting 1025 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:14:35.588 
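The xtrace above shows setup/common.sh's `get_meminfo` loop: it reads `/proc/meminfo` line by line with `IFS=': '`, skips every key with `continue` until the requested one (here `HugePages_Surp`) matches, then echoes its value. A minimal standalone sketch of that parsing pattern follows; the helper name and the direct string compare are illustrative assumptions, not the exact SPDK script, which matches against a glob-escaped pattern.

```shell
# Hypothetical re-creation of the get_meminfo pattern seen in the xtrace:
# scan "Key: value"-shaped lines until the requested key is found, then
# print its numeric value. The function name is illustrative only.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Skip every key until the one we were asked for, as the
        # "continue" lines in the log above do.
        [[ $var == "$get" ]] || continue
        echo "$val"   # unit suffix like "kB" lands in $_ via IFS splitting
        return 0
    done
    return 1
}

# Example against a small in-memory sample instead of /proc/meminfo:
sample='MemTotal: 12241980 kB
HugePages_Total: 512
HugePages_Surp: 0'
get_meminfo_value HugePages_Surp <<<"$sample"   # prints 0
```

In the real run the loop scans dozens of keys before `HugePages_Surp`, which is why the xtrace repeats the `IFS`/`read`/`continue` triple once per meminfo line.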
00:14:35.588 real 0m0.521s 00:14:35.588 user 0m0.282s 00:14:35.588 sys 0m0.274s 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:35.588 16:52:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:14:35.588 ************************************ 00:14:35.588 END TEST odd_alloc 00:14:35.588 ************************************ 00:14:35.588 16:52:36 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:14:35.588 16:52:36 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:14:35.588 16:52:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:35.589 16:52:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.589 16:52:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:14:35.589 ************************************ 00:14:35.589 START TEST custom_alloc 00:14:35.589 ************************************ 00:14:35.589 16:52:36 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:14:35.589 16:52:37 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:14:35.589 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:35.851 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI 
dev 00:14:35.851 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:35.851 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:35.851 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:14:35.851 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:14:35.851 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:14:35.851 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:14:35.851 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:14:35.851 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:14:35.851 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:14:35.851 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:14:35.851 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:35.851 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:35.851 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:35.851 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:14:35.851 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:14:35.851 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:35.851 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:35.851 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:35.851 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:35.851 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 
00:14:35.851 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:35.851 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9000240 kB' 'MemAvailable: 10556836 kB' 'Buffers: 2436 kB' 'Cached: 1770764 kB' 'SwapCached: 0 kB' 'Active: 453296 kB' 'Inactive: 1442668 kB' 'Active(anon): 133228 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 124168 kB' 'Mapped: 49088 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133780 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72180 kB' 'KernelStack: 6616 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.852 16:52:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.852 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.853 16:52:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.853 
16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.853 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9000240 kB' 'MemAvailable: 10556836 kB' 'Buffers: 2436 kB' 'Cached: 1770764 kB' 'SwapCached: 0 kB' 'Active: 452844 kB' 'Inactive: 1442668 kB' 'Active(anon): 132776 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442668 kB' 'Unevictable: 
1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123716 kB' 'Mapped: 48768 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133784 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72184 kB' 'KernelStack: 6544 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB'
[log trimmed: 00:14:35.853-00:14:35.855 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- the read loop checked every /proc/meminfo key from the dump above against HugePages_Surp and hit "continue" on each non-matching key]
00:14:35.855 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:35.855 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:14:35.855 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:14:35.855 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:14:35.855 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:35.855 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:35.855 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:14:35.855 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:14:35.855 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:35.855 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:35.855 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:35.855 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:35.855 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:35.855 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:35.855 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.855 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.856 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9000388 kB' 'MemAvailable: 10556984 kB' 'Buffers: 2436 kB' 'Cached: 1770764 kB' 'SwapCached: 0 kB' 'Active: 452188 kB' 'Inactive: 1442668 kB' 'Active(anon): 132120 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB'
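The trace above repeatedly exercises a `get_meminfo` helper from setup/common.sh: it selects a meminfo source (per-node sysfs file when a node is given, otherwise /proc/meminfo), then reads key/value pairs with `IFS=': '` and skips every key until the requested one matches. A minimal sketch of that loop, reconstructed from the trace (the optional file argument is an assumption added here for illustration, not part of the actual SPDK helper):

```shell
# Sketch of the get_meminfo parsing loop implied by the trace.
# Assumption: reconstructed from the setup/common.sh line tags in this log,
# not copied from the SPDK source; the second argument is hypothetical.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        # Skip every key until the requested one matches
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < "$mem_f"
    return 1
}
```

With the meminfo snapshot shown in this log, `get_meminfo HugePages_Surp` and `get_meminfo HugePages_Rsvd` both print `0`, which is exactly what hugepages.sh stores into `surp` and later into the reserved-page counter.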
'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123536 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133760 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72160 kB' 'KernelStack: 6480 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB'
[log trimmed: 00:14:35.856-00:14:35.858 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- the read loop checked every /proc/meminfo key from the dump above against HugePages_Rsvd and hit "continue" on each non-matching key; the capture cuts off mid-scan]
00:14:35.858 16:52:37 setup.sh.hugepages.custom_alloc
-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.858 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.858 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.858 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.858 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.858 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.858 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.858 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.858 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.858 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.858 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.858 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.858 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.858 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:35.858 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:35.858 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:35.858 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:35.858 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.120 16:52:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:14:36.120 nr_hugepages=512 00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:14:36.120 resv_hugepages=0 
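The trace above is setup/common.sh's `get_meminfo` helper scanning /proc/meminfo (or a per-node meminfo file) key by key until it reaches the requested field, here resolving `HugePages_Rsvd` to 0. A minimal re-sketch of that lookup, with names and behavior inferred from the trace rather than copied from common.sh (the sed/awk formulation is an illustration; the real helper uses a `read`/`mapfile` loop, as the trace shows):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the get_meminfo lookup driving the trace above:
# return one field of /proc/meminfo, or of a per-node meminfo file when a
# node number is given.
get_meminfo() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node N "; strip that, then
    # print the value of the first line whose key matches exactly,
    # dropping the trailing " kB" unit where present.
    sed -E 's/^Node [0-9]+ //' "$mem_f" |
        awk -v key="$get" -F': +' '$1 == key { sub(/ kB$/, "", $2); print $2; exit }'
}

# Example: the reserved-hugepage count the trace resolves to 0.
get_meminfo HugePages_Rsvd
```

With a 512-page pool and nothing reserved or surplus, the subsequent `(( 512 == nr_hugepages + surp + resv ))` check in hugepages.sh passes, which is what the `echo 0` / `return 0` records in the trace confirm.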
00:14:36.120 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:36.121 surplus_hugepages=0 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:36.121 anon_hugepages=0 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9000388 kB' 
'MemAvailable: 10556984 kB' 'Buffers: 2436 kB' 'Cached: 1770764 kB' 'SwapCached: 0 kB' 'Active: 451976 kB' 'Inactive: 1442668 kB' 'Active(anon): 131908 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123372 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133760 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72160 kB' 'KernelStack: 6480 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.121 16:52:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.121 16:52:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.121 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 
16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:14:36.122 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.122 [xtrace elided: the scan steps through the remaining meminfo keys (ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted), each non-matching key hitting `continue`] 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:36.123 16:52:37 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9000388 kB' 'MemUsed: 3241592 kB' 'SwapCached: 0 kB' 'Active: 451976 kB' 'Inactive: 1442668 kB' 'Active(anon): 131908 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1773200 kB' 'Mapped: 48760 kB' 'AnonPages: 123372 kB' 'Shmem: 10464 kB' 'KernelStack: 6480 kB' 'PageTables: 
4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61600 kB' 'Slab: 133752 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:14:36.123 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.123 [xtrace elided: the scan steps through every node0 meminfo key from MemTotal through HugePages_Free, each non-matching key hitting `continue`, until HugePages_Surp matches] 00:14:36.124 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.124 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:14:36.124 16:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:14:36.124 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:36.124 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:36.124 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:36.124 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:14:36.124 node0=512 expecting 512 00:14:36.124 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:14:36.124 16:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:14:36.124 00:14:36.124 real 0m0.524s 00:14:36.124 user 0m0.265s 00:14:36.124 sys 0m0.293s 00:14:36.124 16:52:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:36.124 16:52:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:14:36.124 ************************************ 00:14:36.124 END TEST custom_alloc 00:14:36.124 ************************************ 00:14:36.124 16:52:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:14:36.124 16:52:37 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:14:36.124 16:52:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:36.124 16:52:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:36.124 16:52:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:14:36.124 ************************************ 00:14:36.124 START TEST no_shrink_alloc 00:14:36.124 ************************************ 00:14:36.124 16:52:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:14:36.124 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:14:36.124 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:14:36.124 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:14:36.124 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:14:36.124 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:14:36.124 16:52:37 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:14:36.124 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:36.124 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:14:36.124 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:14:36.125 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:14:36.125 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:14:36.125 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:14:36.125 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:36.125 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:36.125 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:36.125 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:14:36.125 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:14:36.125 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:14:36.125 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:14:36.125 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:14:36.125 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:14:36.125 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:36.386 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:36.386 0000:00:11.0 (1b36 0010): Already 
using the uio_pci_generic driver 00:14:36.386 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7951844 kB' 'MemAvailable: 9508440 kB' 'Buffers: 2436 kB' 'Cached: 1770764 kB' 'SwapCached: 0 kB' 'Active: 452504 kB' 'Inactive: 1442668 kB' 'Active(anon): 132436 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123820 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133756 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72156 kB' 'KernelStack: 6468 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.386 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
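The long key-by-key scans in this log all come from one helper pattern in setup/common.sh: dump the meminfo snapshot with printf, then read it back line by line with IFS=': ', skipping every key that is not the one requested and echoing the value on a match. A minimal standalone sketch of that pattern (the function body, sample data, and temp-file demo are illustrative, not the exact SPDK source):

```shell
#!/usr/bin/env bash
# Minimal sketch of the meminfo field-scan pattern visible in the xtrace:
# split each "Key:   value [kB]" line on ':' and spaces, skip non-matching
# keys with `continue`, and echo the value once the requested key is found.
get_meminfo() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$file"
    return 1    # key not present
}

# Deterministic demo against a sample snippet instead of the live /proc/meminfo.
sample=$(mktemp)
printf '%s\n' 'MemTotal: 12241980 kB' 'HugePages_Total: 512' 'HugePages_Surp: 0' > "$sample"
get_meminfo HugePages_Total "$sample"   # prints: 512
rm -f "$sample"
```

With IFS=': ' the unit suffix lands in the throwaway `_` variable, so the caller gets a bare number it can use directly in arithmetic like `(( 512 == nr_hugepages + surp + resv ))`.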
00:14:36.386 [xtrace elided: the AnonHugePages scan steps through the meminfo keys from MemFree through Slab, each non-matching key hitting `continue`] 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.387 16:52:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.387 
16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.387 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.388 16:52:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.388 16:52:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7952480 kB' 'MemAvailable: 9509076 kB' 'Buffers: 2436 kB' 'Cached: 1770764 kB' 'SwapCached: 0 kB' 'Active: 452044 kB' 'Inactive: 1442668 kB' 'Active(anon): 131976 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123144 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133752 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72152 kB' 'KernelStack: 6480 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.388 
16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.388 16:52:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.388 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 
16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 
16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.389 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read -r var val _ / continue trace repeated for the remaining /proc/meminfo fields elided ...] 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:14:36.390 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:14:36.652 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:36.652 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:36.652 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:14:36.652 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:14:36.652 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:36.652 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- 
# mem_f=/proc/meminfo 00:14:36.652 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:36.652 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:36.652 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:36.652 16:52:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:36.652 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.653 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7952480 kB' 'MemAvailable: 9509076 kB' 'Buffers: 2436 kB' 'Cached: 1770764 kB' 'SwapCached: 0 kB' 'Active: 452328 kB' 'Inactive: 1442668 kB' 'Active(anon): 132260 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123420 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133752 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72152 kB' 'KernelStack: 6480 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:36.653 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:14:36.653 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:36.653 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read -r var val _ / continue trace repeated for the remaining /proc/meminfo fields elided ...] 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:14:36.655 nr_hugepages=1024 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:14:36.655 resv_hugepages=0 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:36.655 surplus_hugepages=0 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:36.655 anon_hugepages=0 
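The trace above shows SPDK's get_meminfo helper walking /proc/meminfo with an IFS=': ' / read -r var val _ loop until the requested field (HugePages_Surp, then HugePages_Rsvd) matches, then echoing its value. A minimal standalone sketch of that pattern is below; the field handling is inferred from the trace and is an illustration, not the actual setup/common.sh source.

```shell
#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup (assumption: modeled on the
# IFS=': ' / read -r var val _ loop seen in the trace, not SPDK's code).
get_meminfo() {
    local get=$1 var val _
    # Each /proc/meminfo line looks like "HugePages_Surp:       0" or
    # "MemTotal:       12241980 kB"; splitting on ': ' leaves the
    # numeric value in $val and any unit (e.g. "kB") in $_.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1   # field not present
}

get_meminfo MemTotal
```

Fields with no unit (the HugePages_* counters) and fields with a trailing "kB" both yield just the number, since the unit lands in the throwaway third variable.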
00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7952480 kB' 'MemAvailable: 9509076 kB' 'Buffers: 2436 kB' 'Cached: 1770764 kB' 'SwapCached: 0 kB' 'Active: 452036 kB' 'Inactive: 1442668 kB' 'Active(anon): 131968 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442668 kB' 
'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123076 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133752 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72152 kB' 'KernelStack: 6464 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.655 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.655 16:52:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read -r var val _ / continue trace repeated for the intervening /proc/meminfo fields elided ...] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.656 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.657 16:52:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@27 -- # local node 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _
00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7952480 kB' 'MemUsed: 4289500 kB' 'SwapCached: 0 kB' 'Active: 452020 kB' 'Inactive: 1442668 kB' 'Active(anon): 131952 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1773200 kB' 'Mapped: 48760 kB' 'AnonPages: 123320 kB' 'Shmem: 10464 kB' 'KernelStack: 6464 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61600 kB' 'Slab: 133752 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:14:36.657 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:14:36.658 [... identical [[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS / read trace repeated for each remaining non-matching node0 meminfo key, MemFree through HugePages_Free ...]
00:14:36.659 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:36.659 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:14:36.659 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:14:36.659 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:14:36.659 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:14:36.659 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:14:36.659 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:14:36.659 node0=1024 expecting 1024
00:14:36.659 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:14:36.659 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:14:36.659 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:14:36.659 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:14:36.659 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:14:36.659 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:14:36.659 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:14:36.919 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:14:36.919 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:36.919 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:14:36.919 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:14:36.919 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:14:36.919 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:14:36.919 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:14:36.919 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:14:36.919 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:14:36.919 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:14:36.919 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:36.919 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:36.919 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:36.919 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:14:36.919 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:14:36.919 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:36.919 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:36.919 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:36.919 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:36.919 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:36.919 16:52:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:36.919 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.919 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7951760 kB' 'MemAvailable: 9508352 kB' 'Buffers: 2436 kB' 'Cached: 1770760 kB' 'SwapCached: 0 kB' 'Active: 452808 kB' 'Inactive: 1442664 kB' 'Active(anon): 132740 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123904 kB' 'Mapped: 48928 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133756 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72156 kB' 'KernelStack: 6520 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 
16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 
16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:36.920 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.920 16:52:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7951876 kB' 'MemAvailable: 9508468 kB' 'Buffers: 2436 kB' 'Cached: 1770760 kB' 'SwapCached: 0 kB' 'Active: 452200 kB' 'Inactive: 1442664 kB' 'Active(anon): 132132 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123292 kB' 'Mapped: 48928 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133744 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72144 kB' 'KernelStack: 6440 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.921 
16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.921 16:52:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:36.921 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.921 
16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: the IFS=': ' read loop steps over each remaining /proc/meminfo key in turn (Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd), hitting `continue` on each non-match until HugePages_Surp matches] 00:14:36.922 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:14:36.922 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:14:36.922 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:14:36.922 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:36.922 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:36.922 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:14:36.922 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:14:36.922 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:36.922 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 --
# mem_f=/proc/meminfo 00:14:36.922 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:36.922 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:36.922 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:36.922 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:36.922 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:36.922 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7951876 kB' 'MemAvailable: 9508468 kB' 'Buffers: 2436 kB' 'Cached: 1770760 kB' 'SwapCached: 0 kB' 'Active: 452200 kB' 'Inactive: 1442664 kB' 'Active(anon): 132132 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123552 kB' 'Mapped: 48928 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133744 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72144 kB' 'KernelStack: 6508 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:36.922 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:14:36.922 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: the same IFS=': ' read loop steps over each /proc/meminfo key in turn (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free), hitting `continue` on each non-match until HugePages_Rsvd matches] 00:14:36.923 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:14:36.923 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:14:36.923 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:14:36.923 nr_hugepages=1024 00:14:36.923 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:14:36.923 resv_hugepages=0 00:14:36.923 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:36.923 surplus_hugepages=0 00:14:36.923 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:36.923 anon_hugepages=0
00:14:36.923 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:36.923 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7951624 kB' 'MemAvailable: 9508220 kB' 'Buffers: 2436 kB' 'Cached: 1770764 kB' 'SwapCached: 0 kB' 'Active: 452032 kB' 'Inactive: 1442668 kB' 'Active(anon): 131964 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442668 kB' 
'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 123072 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 61600 kB' 'Slab: 133716 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72116 kB' 'KernelStack: 6464 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.184 16:52:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.184 16:52:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.184 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.185 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@27 -- # local node 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7951624 kB' 'MemUsed: 4290356 kB' 'SwapCached: 0 kB' 'Active: 452044 kB' 'Inactive: 1442668 kB' 'Active(anon): 131976 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1442668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 1773200 kB' 'Mapped: 48764 kB' 'AnonPages: 123396 kB' 'Shmem: 10464 kB' 'KernelStack: 6480 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61600 kB' 'Slab: 133716 kB' 'SReclaimable: 61600 kB' 'SUnreclaim: 72116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.186 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.187 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:37.188 16:52:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:37.188 node0=1024 expecting 1024 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:14:37.188 00:14:37.188 real 0m1.012s 00:14:37.188 user 0m0.517s 00:14:37.188 sys 0m0.562s 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:37.188 16:52:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:14:37.188 ************************************ 00:14:37.188 END TEST no_shrink_alloc 00:14:37.188 ************************************ 00:14:37.188 16:52:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:14:37.188 16:52:38 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:14:37.188 16:52:38 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:14:37.188 16:52:38 setup.sh.hugepages -- setup/hugepages.sh@39 
-- # for node in "${!nodes_sys[@]}" 00:14:37.188 16:52:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:37.188 16:52:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:14:37.188 16:52:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:37.188 16:52:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:14:37.188 16:52:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:14:37.188 16:52:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:14:37.188 ************************************ 00:14:37.188 END TEST hugepages 00:14:37.188 ************************************ 00:14:37.188 00:14:37.188 real 0m4.613s 00:14:37.188 user 0m2.245s 00:14:37.188 sys 0m2.421s 00:14:37.188 16:52:38 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:37.188 16:52:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:14:37.188 16:52:38 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:14:37.188 16:52:38 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:14:37.188 16:52:38 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:37.188 16:52:38 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:37.188 16:52:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:14:37.188 ************************************ 00:14:37.188 START TEST driver 00:14:37.188 ************************************ 00:14:37.188 16:52:38 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:14:37.188 * Looking for test storage... 
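The long run of `IFS=': '` / `read -r var val _` / `continue` trace above comes from a shell loop that scans `/proc/meminfo`-style output field by field until it hits the key it wants (here `HugePages_Surp`). A minimal sketch of that parsing pattern follows; `get_field` and the `sample` data are illustrative names for this sketch, not SPDK's actual helpers, and the sample values are taken from the trace above.

```shell
#!/usr/bin/env bash
# Sketch of the field-scan pattern seen in the trace: split each line on
# ':' and spaces, keep the key in $var and the value in $val, and stop
# when the key matches the one we are looking for.
get_field() {
  local target=$1 var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$target" ]]; then
      echo "$val"
      return 0
    fi
  done
  return 1   # key not present in the input
}

# Sample input modeled on the meminfo dump printed earlier in the log.
sample='MemTotal: 12241980 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Surp: 0'

get_field HugePages_Surp <<<"$sample"   # prints 0
```

The trace is so long because the real script walks every meminfo field, executing `continue` for each non-matching key, and xtrace prints every one of those iterations.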
00:14:37.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:37.188 16:52:38 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:14:37.188 16:52:38 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:37.188 16:52:38 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:37.754 16:52:39 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:14:37.754 16:52:39 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:37.754 16:52:39 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:37.754 16:52:39 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:14:37.754 ************************************ 00:14:37.754 START TEST guess_driver 00:14:37.754 ************************************ 00:14:37.754 16:52:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:14:37.754 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:14:37.754 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:14:37.754 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:14:37.754 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:14:37.754 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:14:37.754 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:14:37.754 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:14:37.754 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:14:37.754 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:14:37.754 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:14:37.754 16:52:39 
setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:14:37.754 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:14:37.754 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:14:37.754 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:14:37.754 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:14:37.754 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:14:37.754 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:14:37.754 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:14:37.754 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:14:38.012 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:14:38.012 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:14:38.012 Looking for driver=uio_pci_generic 00:14:38.012 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:14:38.012 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:14:38.012 16:52:39 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:14:38.012 16:52:39 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:14:38.012 16:52:39 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:38.579 16:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:14:38.579 16:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:14:38.579 16:52:40 
setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:14:38.579 16:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:14:38.579 16:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:14:38.579 16:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:14:38.838 16:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:14:38.838 16:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:14:38.838 16:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:14:38.838 16:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:14:38.838 16:52:40 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:14:38.838 16:52:40 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:38.838 16:52:40 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:39.434 00:14:39.434 real 0m1.471s 00:14:39.434 user 0m0.569s 00:14:39.434 sys 0m0.882s 00:14:39.434 16:52:40 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:39.434 16:52:40 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:14:39.434 ************************************ 00:14:39.434 END TEST guess_driver 00:14:39.434 ************************************ 00:14:39.434 16:52:40 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:14:39.434 00:14:39.434 real 0m2.181s 00:14:39.434 user 0m0.809s 00:14:39.434 sys 0m1.396s 00:14:39.434 16:52:40 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:39.434 16:52:40 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:14:39.434 ************************************ 
00:14:39.434 END TEST driver 00:14:39.434 ************************************ 00:14:39.434 16:52:40 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:14:39.434 16:52:40 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:14:39.434 16:52:40 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:39.434 16:52:40 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:39.435 16:52:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:14:39.435 ************************************ 00:14:39.435 START TEST devices 00:14:39.435 ************************************ 00:14:39.435 16:52:40 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:14:39.435 * Looking for test storage... 00:14:39.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:39.435 16:52:40 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:14:39.435 16:52:40 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:14:39.435 16:52:40 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:39.435 16:52:40 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:14:40.423 16:52:41 setup.sh.devices 
-- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:40.423 16:52:41 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 
00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:14:40.423 16:52:41 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:14:40.423 16:52:41 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:14:40.423 No valid GPT data, bailing 00:14:40.423 16:52:41 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:14:40.423 16:52:41 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:14:40.423 16:52:41 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:14:40.423 16:52:41 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:40.423 16:52:41 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:40.423 16:52:41 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in 
"/sys/block/nvme"!(*c*) 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:14:40.423 16:52:41 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:14:40.423 16:52:41 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:14:40.423 No valid GPT data, bailing 00:14:40.423 16:52:41 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:14:40.423 16:52:41 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:14:40.423 16:52:41 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:14:40.423 16:52:41 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:14:40.423 16:52:41 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:14:40.423 16:52:41 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 
00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:14:40.423 16:52:41 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:14:40.423 16:52:41 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:14:40.423 No valid GPT data, bailing 00:14:40.423 16:52:41 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:14:40.423 16:52:41 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:14:40.423 16:52:41 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:14:40.423 16:52:41 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:14:40.423 16:52:41 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:14:40.424 16:52:41 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:14:40.424 16:52:41 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:14:40.424 16:52:41 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:14:40.424 16:52:41 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:14:40.424 16:52:41 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:14:40.424 16:52:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:14:40.424 16:52:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:14:40.424 16:52:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:14:40.424 16:52:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:14:40.424 16:52:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:14:40.424 16:52:41 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:14:40.424 16:52:41 setup.sh.devices -- scripts/common.sh@378 -- # local 
block=nvme1n1 pt 00:14:40.424 16:52:41 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:14:40.424 No valid GPT data, bailing 00:14:40.424 16:52:42 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:14:40.424 16:52:42 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:14:40.424 16:52:42 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:14:40.424 16:52:42 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:14:40.424 16:52:42 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:14:40.424 16:52:42 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:14:40.424 16:52:42 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:14:40.683 16:52:42 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:14:40.683 16:52:42 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:14:40.683 16:52:42 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:14:40.683 16:52:42 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:14:40.683 16:52:42 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:14:40.683 16:52:42 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:14:40.683 16:52:42 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:40.683 16:52:42 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.683 16:52:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:14:40.683 ************************************ 00:14:40.683 START TEST nvme_mount 00:14:40.683 ************************************ 00:14:40.683 16:52:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:14:40.683 16:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:14:40.683 16:52:42 
setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:14:40.683 16:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:40.683 16:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:40.683 16:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:14:40.683 16:52:42 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:14:40.683 16:52:42 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:14:40.683 16:52:42 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:14:40.683 16:52:42 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:14:40.683 16:52:42 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:14:40.683 16:52:42 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:14:40.683 16:52:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:14:40.683 16:52:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:14:40.683 16:52:42 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:14:40.683 16:52:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:14:40.683 16:52:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:14:40.683 16:52:42 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:14:40.683 16:52:42 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:14:40.683 16:52:42 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:14:41.618 Creating new GPT entries in memory. 00:14:41.618 GPT data structures destroyed! 
You may now partition the disk using fdisk or 00:14:41.618 other utilities. 00:14:41.618 16:52:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:14:41.618 16:52:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:41.618 16:52:43 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:14:41.618 16:52:43 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:14:41.618 16:52:43 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:14:42.584 Creating new GPT entries in memory. 00:14:42.584 The operation has completed successfully. 00:14:42.584 16:52:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:14:42.584 16:52:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:42.584 16:52:44 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57242 00:14:42.584 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:42.584 16:52:44 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:14:42.584 16:52:44 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:42.584 16:52:44 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:14:42.584 16:52:44 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:14:42.584 16:52:44 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:42.584 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:42.584 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:14:42.584 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:14:42.584 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:42.584 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:42.584 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:14:42.584 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:14:42.584 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:14:42.584 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:14:42.584 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:14:42.584 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:42.584 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:14:42.585 16:52:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:14:42.585 16:52:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:42.842 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:42.842 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:14:42.842 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # 
found=1 00:14:42.842 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:42.842 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:42.842 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:43.101 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:43.101 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:43.101 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:43.101 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:43.101 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:43.101 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:14:43.101 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:43.101 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:14:43.101 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:43.101 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:14:43.101 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:43.101 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:43.101 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:43.101 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@25 
-- # wipefs --all /dev/nvme0n1p1 00:14:43.101 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:14:43.101 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:14:43.101 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:14:43.358 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:14:43.359 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:14:43.359 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:14:43.359 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:14:43.359 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:14:43.359 16:52:44 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:14:43.359 16:52:44 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:43.359 16:52:44 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:14:43.359 16:52:44 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:14:43.359 16:52:44 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:43.630 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:43.630 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:14:43.630 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:14:43.630 16:52:44 setup.sh.devices.nvme_mount -- 
setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:43.630 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:43.630 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:14:43.630 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:14:43.630 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:14:43.631 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:14:43.631 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:43.631 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:14:43.631 16:52:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:14:43.631 16:52:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:14:43.631 16:52:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:43.631 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:43.631 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:14:43.631 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:14:43.631 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:43.631 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:43.631 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount 
-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:14:43.889 16:52:45 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:44.147 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:44.147 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:14:44.147 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:14:44.147 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:44.147 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:44.147 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:44.405 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:44.406 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:44.406 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:44.406 16:52:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:44.406 16:52:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:44.406 16:52:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:14:44.406 16:52:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 
00:14:44.406 16:52:46 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:14:44.406 16:52:46 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:44.406 16:52:46 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:44.406 16:52:46 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:14:44.406 16:52:46 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:14:44.664 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:14:44.664 00:14:44.664 real 0m3.985s 00:14:44.664 user 0m0.682s 00:14:44.664 sys 0m1.028s 00:14:44.664 16:52:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:44.664 16:52:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:14:44.664 ************************************ 00:14:44.664 END TEST nvme_mount 00:14:44.664 ************************************ 00:14:44.664 16:52:46 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:14:44.664 16:52:46 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:14:44.664 16:52:46 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:44.664 16:52:46 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.664 16:52:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:14:44.664 ************************************ 00:14:44.664 START TEST dm_mount 00:14:44.664 ************************************ 00:14:44.664 16:52:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:14:44.664 16:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:14:44.664 16:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:14:44.664 16:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 
00:14:44.664 16:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:14:44.664 16:52:46 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:14:44.664 16:52:46 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:14:44.664 16:52:46 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:14:44.664 16:52:46 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:14:44.664 16:52:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:14:44.664 16:52:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:14:44.665 16:52:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:14:44.665 16:52:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:14:44.665 16:52:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:14:44.665 16:52:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:14:44.665 16:52:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:14:44.665 16:52:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:14:44.665 16:52:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:14:44.665 16:52:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:14:44.665 16:52:46 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:14:44.665 16:52:46 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:14:44.665 16:52:46 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:14:45.600 Creating new GPT entries in memory. 00:14:45.600 GPT data structures destroyed! You may now partition the disk using fdisk or 00:14:45.600 other utilities. 
00:14:45.600 16:52:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:14:45.601 16:52:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:45.601 16:52:47 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:14:45.601 16:52:47 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:14:45.601 16:52:47 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:14:46.535 Creating new GPT entries in memory. 00:14:46.535 The operation has completed successfully. 00:14:46.535 16:52:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:14:46.535 16:52:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:46.535 16:52:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:14:46.535 16:52:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:14:46.535 16:52:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:14:47.947 The operation has completed successfully. 
00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57673 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test 
mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 
00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:47.947 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:14:48.226 16:52:49 
setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:14:48.226 16:52:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:48.484 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:48.484 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:14:48.484 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:14:48.484 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:48.484 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:48.484 16:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:48.484 16:52:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:48.484 16:52:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:48.742 16:52:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:48.742 16:52:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:48.742 16:52:50 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:48.742 16:52:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:14:48.742 16:52:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:14:48.742 16:52:50 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:14:48.742 16:52:50 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:48.742 16:52:50 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:14:48.742 16:52:50 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:14:48.742 16:52:50 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:48.742 16:52:50 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:14:48.742 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:14:48.742 16:52:50 setup.sh.devices.dm_mount -- 
setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:14:48.742 16:52:50 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:14:48.742 00:14:48.742 real 0m4.165s 00:14:48.742 user 0m0.465s 00:14:48.742 sys 0m0.651s 00:14:48.742 16:52:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:48.742 16:52:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:14:48.742 ************************************ 00:14:48.742 END TEST dm_mount 00:14:48.742 ************************************ 00:14:48.742 16:52:50 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:14:48.742 16:52:50 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:14:48.742 16:52:50 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:14:48.742 16:52:50 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:48.742 16:52:50 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:48.742 16:52:50 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:14:48.742 16:52:50 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:14:48.742 16:52:50 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:14:49.001 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:14:49.001 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:14:49.001 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:14:49.001 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:14:49.001 16:52:50 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:14:49.001 16:52:50 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:49.001 16:52:50 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:14:49.001 16:52:50 
setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:49.001 16:52:50 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:14:49.001 16:52:50 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:14:49.001 16:52:50 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:14:49.001 00:14:49.001 real 0m9.676s 00:14:49.001 user 0m1.779s 00:14:49.001 sys 0m2.282s 00:14:49.001 16:52:50 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:49.001 16:52:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:14:49.001 ************************************ 00:14:49.001 END TEST devices 00:14:49.001 ************************************ 00:14:49.259 16:52:50 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:14:49.259 00:14:49.259 real 0m21.564s 00:14:49.259 user 0m6.986s 00:14:49.259 sys 0m8.968s 00:14:49.259 16:52:50 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:49.259 16:52:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:14:49.259 ************************************ 00:14:49.259 END TEST setup.sh 00:14:49.259 ************************************ 00:14:49.259 16:52:50 -- common/autotest_common.sh@1142 -- # return 0 00:14:49.259 16:52:50 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:14:49.826 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:49.826 Hugepages 00:14:49.826 node hugesize free / total 00:14:49.826 node0 1048576kB 0 / 0 00:14:49.826 node0 2048kB 2048 / 2048 00:14:49.826 00:14:49.826 Type BDF Vendor Device NUMA Driver Device Block devices 00:14:49.826 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:14:49.826 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:14:50.084 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:14:50.084 16:52:51 -- spdk/autotest.sh@130 -- # uname 
-s 00:14:50.084 16:52:51 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:14:50.084 16:52:51 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:14:50.084 16:52:51 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:50.650 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:50.650 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:50.908 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:50.908 16:52:52 -- common/autotest_common.sh@1532 -- # sleep 1 00:14:51.842 16:52:53 -- common/autotest_common.sh@1533 -- # bdfs=() 00:14:51.842 16:52:53 -- common/autotest_common.sh@1533 -- # local bdfs 00:14:51.842 16:52:53 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:14:51.842 16:52:53 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:14:51.842 16:52:53 -- common/autotest_common.sh@1513 -- # bdfs=() 00:14:51.842 16:52:53 -- common/autotest_common.sh@1513 -- # local bdfs 00:14:51.842 16:52:53 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:51.842 16:52:53 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:51.842 16:52:53 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:14:51.842 16:52:53 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:14:51.842 16:52:53 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:14:51.842 16:52:53 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:52.408 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:52.408 Waiting for block devices as requested 00:14:52.408 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:52.408 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:52.408 16:52:53 -- 
common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:14:52.408 16:52:53 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:14:52.408 16:52:53 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:14:52.408 16:52:53 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:14:52.408 16:52:53 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:14:52.408 16:52:53 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:14:52.408 16:52:53 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:14:52.408 16:52:53 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:14:52.408 16:52:53 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:14:52.408 16:52:53 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:14:52.408 16:52:53 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:14:52.408 16:52:53 -- common/autotest_common.sh@1545 -- # grep oacs 00:14:52.408 16:52:53 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:14:52.408 16:52:53 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:14:52.408 16:52:53 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:14:52.408 16:52:53 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:14:52.408 16:52:53 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:14:52.408 16:52:53 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:14:52.408 16:52:53 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:14:52.408 16:52:53 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:14:52.408 16:52:53 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:14:52.408 16:52:53 -- common/autotest_common.sh@1557 -- # continue 00:14:52.408 16:52:53 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:14:52.408 16:52:53 -- 
common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:14:52.409 16:52:53 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:14:52.409 16:52:53 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:14:52.409 16:52:53 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:14:52.409 16:52:53 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:14:52.409 16:52:53 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:14:52.409 16:52:53 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:14:52.409 16:52:53 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:14:52.409 16:52:53 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:14:52.409 16:52:53 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:14:52.409 16:52:53 -- common/autotest_common.sh@1545 -- # grep oacs 00:14:52.409 16:52:53 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:14:52.409 16:52:53 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:14:52.409 16:52:53 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:14:52.409 16:52:53 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:14:52.409 16:52:53 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:14:52.409 16:52:53 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:14:52.409 16:52:53 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:14:52.409 16:52:53 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:14:52.409 16:52:53 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:14:52.409 16:52:53 -- common/autotest_common.sh@1557 -- # continue 00:14:52.409 16:52:53 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:14:52.409 16:52:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:52.409 16:52:53 -- common/autotest_common.sh@10 -- 
# set +x 00:14:52.667 16:52:54 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:14:52.667 16:52:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:52.667 16:52:54 -- common/autotest_common.sh@10 -- # set +x 00:14:52.667 16:52:54 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:53.233 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:53.233 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:53.234 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:53.234 16:52:54 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:14:53.234 16:52:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:53.234 16:52:54 -- common/autotest_common.sh@10 -- # set +x 00:14:53.614 16:52:54 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:14:53.614 16:52:54 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:14:53.614 16:52:54 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:14:53.614 16:52:54 -- common/autotest_common.sh@1577 -- # bdfs=() 00:14:53.614 16:52:54 -- common/autotest_common.sh@1577 -- # local bdfs 00:14:53.614 16:52:54 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:14:53.614 16:52:54 -- common/autotest_common.sh@1513 -- # bdfs=() 00:14:53.614 16:52:54 -- common/autotest_common.sh@1513 -- # local bdfs 00:14:53.614 16:52:54 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:53.614 16:52:54 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:53.614 16:52:54 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:14:53.614 16:52:54 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:14:53.614 16:52:54 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:14:53.614 16:52:54 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:14:53.614 
16:52:54 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:14:53.614 16:52:54 -- common/autotest_common.sh@1580 -- # device=0x0010 00:14:53.614 16:52:54 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:14:53.614 16:52:54 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:14:53.614 16:52:54 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:14:53.614 16:52:54 -- common/autotest_common.sh@1580 -- # device=0x0010 00:14:53.614 16:52:54 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:14:53.614 16:52:54 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:14:53.614 16:52:54 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:14:53.614 16:52:54 -- common/autotest_common.sh@1593 -- # return 0 00:14:53.614 16:52:54 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:14:53.614 16:52:54 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:14:53.614 16:52:54 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:14:53.614 16:52:54 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:14:53.614 16:52:54 -- spdk/autotest.sh@162 -- # timing_enter lib 00:14:53.614 16:52:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:53.614 16:52:54 -- common/autotest_common.sh@10 -- # set +x 00:14:53.614 16:52:54 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:14:53.614 16:52:54 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:14:53.614 16:52:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:53.614 16:52:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:53.614 16:52:54 -- common/autotest_common.sh@10 -- # set +x 00:14:53.614 ************************************ 00:14:53.614 START TEST env 00:14:53.614 ************************************ 00:14:53.614 16:52:54 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:14:53.614 * Looking for test storage... 
00:14:53.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:14:53.614 16:52:55 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:14:53.614 16:52:55 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:53.614 16:52:55 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:53.614 16:52:55 env -- common/autotest_common.sh@10 -- # set +x 00:14:53.614 ************************************ 00:14:53.614 START TEST env_memory 00:14:53.614 ************************************ 00:14:53.614 16:52:55 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:14:53.614 00:14:53.614 00:14:53.614 CUnit - A unit testing framework for C - Version 2.1-3 00:14:53.614 http://cunit.sourceforge.net/ 00:14:53.614 00:14:53.614 00:14:53.614 Suite: memory 00:14:53.614 Test: alloc and free memory map ...[2024-07-22 16:52:55.135064] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:14:53.614 passed 00:14:53.614 Test: mem map translation ...[2024-07-22 16:52:55.195615] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:14:53.614 [2024-07-22 16:52:55.195723] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:14:53.614 [2024-07-22 16:52:55.195828] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:14:53.614 [2024-07-22 16:52:55.195858] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:14:53.873 passed 00:14:53.873 Test: mem map registration ...[2024-07-22 16:52:55.294075] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:14:53.873 [2024-07-22 16:52:55.294178] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:14:53.873 passed 00:14:53.873 Test: mem map adjacent registrations ...passed 00:14:53.873 00:14:53.873 Run Summary: Type Total Ran Passed Failed Inactive 00:14:53.873 suites 1 1 n/a 0 0 00:14:53.873 tests 4 4 4 0 0 00:14:53.873 asserts 152 152 152 0 n/a 00:14:53.873 00:14:53.873 Elapsed time = 0.322 seconds 00:14:53.873 00:14:53.873 real 0m0.363s 00:14:53.873 user 0m0.332s 00:14:53.873 sys 0m0.026s 00:14:53.873 16:52:55 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:53.873 16:52:55 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:14:53.873 ************************************ 00:14:53.873 END TEST env_memory 00:14:53.873 ************************************ 00:14:53.873 16:52:55 env -- common/autotest_common.sh@1142 -- # return 0 00:14:53.873 16:52:55 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:14:53.873 16:52:55 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:53.873 16:52:55 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:53.873 16:52:55 env -- common/autotest_common.sh@10 -- # set +x 00:14:53.873 ************************************ 00:14:53.873 START TEST env_vtophys 00:14:53.873 ************************************ 00:14:53.873 16:52:55 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:14:54.131 EAL: lib.eal log level changed from notice to debug 00:14:54.131 EAL: Detected lcore 0 as core 0 on socket 0 00:14:54.131 EAL: Detected lcore 1 as core 0 on socket 0 00:14:54.131 EAL: Detected lcore 2 as core 0 on socket 0 00:14:54.131 EAL: 
Detected lcore 3 as core 0 on socket 0 00:14:54.131 EAL: Detected lcore 4 as core 0 on socket 0 00:14:54.131 EAL: Detected lcore 5 as core 0 on socket 0 00:14:54.131 EAL: Detected lcore 6 as core 0 on socket 0 00:14:54.131 EAL: Detected lcore 7 as core 0 on socket 0 00:14:54.131 EAL: Detected lcore 8 as core 0 on socket 0 00:14:54.131 EAL: Detected lcore 9 as core 0 on socket 0 00:14:54.131 EAL: Maximum logical cores by configuration: 128 00:14:54.131 EAL: Detected CPU lcores: 10 00:14:54.131 EAL: Detected NUMA nodes: 1 00:14:54.131 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:14:54.131 EAL: Detected shared linkage of DPDK 00:14:54.131 EAL: No shared files mode enabled, IPC will be disabled 00:14:54.131 EAL: Selected IOVA mode 'PA' 00:14:54.131 EAL: Probing VFIO support... 00:14:54.131 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:14:54.131 EAL: VFIO modules not loaded, skipping VFIO support... 00:14:54.131 EAL: Ask a virtual area of 0x2e000 bytes 00:14:54.131 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:14:54.131 EAL: Setting up physically contiguous memory... 
00:14:54.131 EAL: Setting maximum number of open files to 524288 00:14:54.132 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:14:54.132 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:14:54.132 EAL: Ask a virtual area of 0x61000 bytes 00:14:54.132 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:14:54.132 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:54.132 EAL: Ask a virtual area of 0x400000000 bytes 00:14:54.132 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:14:54.132 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:14:54.132 EAL: Ask a virtual area of 0x61000 bytes 00:14:54.132 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:14:54.132 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:54.132 EAL: Ask a virtual area of 0x400000000 bytes 00:14:54.132 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:14:54.132 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:14:54.132 EAL: Ask a virtual area of 0x61000 bytes 00:14:54.132 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:14:54.132 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:54.132 EAL: Ask a virtual area of 0x400000000 bytes 00:14:54.132 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:14:54.132 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:14:54.132 EAL: Ask a virtual area of 0x61000 bytes 00:14:54.132 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:14:54.132 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:54.132 EAL: Ask a virtual area of 0x400000000 bytes 00:14:54.132 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:14:54.132 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:14:54.132 EAL: Hugepages will be freed exactly as allocated. 
00:14:54.132 EAL: No shared files mode enabled, IPC is disabled 00:14:54.132 EAL: No shared files mode enabled, IPC is disabled 00:14:54.132 EAL: TSC frequency is ~2200000 KHz 00:14:54.132 EAL: Main lcore 0 is ready (tid=7fba5b677a40;cpuset=[0]) 00:14:54.132 EAL: Trying to obtain current memory policy. 00:14:54.132 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:54.132 EAL: Restoring previous memory policy: 0 00:14:54.132 EAL: request: mp_malloc_sync 00:14:54.132 EAL: No shared files mode enabled, IPC is disabled 00:14:54.132 EAL: Heap on socket 0 was expanded by 2MB 00:14:54.132 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:14:54.132 EAL: No PCI address specified using 'addr=' in: bus=pci 00:14:54.132 EAL: Mem event callback 'spdk:(nil)' registered 00:14:54.132 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:14:54.132 00:14:54.132 00:14:54.132 CUnit - A unit testing framework for C - Version 2.1-3 00:14:54.132 http://cunit.sourceforge.net/ 00:14:54.132 00:14:54.132 00:14:54.132 Suite: components_suite 00:14:54.700 Test: vtophys_malloc_test ...passed 00:14:54.700 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:14:54.700 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:54.700 EAL: Restoring previous memory policy: 4 00:14:54.700 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.700 EAL: request: mp_malloc_sync 00:14:54.700 EAL: No shared files mode enabled, IPC is disabled 00:14:54.700 EAL: Heap on socket 0 was expanded by 4MB 00:14:54.700 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.700 EAL: request: mp_malloc_sync 00:14:54.700 EAL: No shared files mode enabled, IPC is disabled 00:14:54.700 EAL: Heap on socket 0 was shrunk by 4MB 00:14:54.700 EAL: Trying to obtain current memory policy. 
00:14:54.700 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:54.700 EAL: Restoring previous memory policy: 4 00:14:54.700 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.700 EAL: request: mp_malloc_sync 00:14:54.700 EAL: No shared files mode enabled, IPC is disabled 00:14:54.700 EAL: Heap on socket 0 was expanded by 6MB 00:14:54.700 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.700 EAL: request: mp_malloc_sync 00:14:54.700 EAL: No shared files mode enabled, IPC is disabled 00:14:54.700 EAL: Heap on socket 0 was shrunk by 6MB 00:14:54.700 EAL: Trying to obtain current memory policy. 00:14:54.700 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:54.700 EAL: Restoring previous memory policy: 4 00:14:54.700 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.700 EAL: request: mp_malloc_sync 00:14:54.700 EAL: No shared files mode enabled, IPC is disabled 00:14:54.700 EAL: Heap on socket 0 was expanded by 10MB 00:14:54.700 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.700 EAL: request: mp_malloc_sync 00:14:54.700 EAL: No shared files mode enabled, IPC is disabled 00:14:54.700 EAL: Heap on socket 0 was shrunk by 10MB 00:14:54.700 EAL: Trying to obtain current memory policy. 00:14:54.700 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:54.700 EAL: Restoring previous memory policy: 4 00:14:54.700 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.700 EAL: request: mp_malloc_sync 00:14:54.700 EAL: No shared files mode enabled, IPC is disabled 00:14:54.700 EAL: Heap on socket 0 was expanded by 18MB 00:14:54.700 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.700 EAL: request: mp_malloc_sync 00:14:54.700 EAL: No shared files mode enabled, IPC is disabled 00:14:54.700 EAL: Heap on socket 0 was shrunk by 18MB 00:14:54.700 EAL: Trying to obtain current memory policy. 
00:14:54.700 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:54.700 EAL: Restoring previous memory policy: 4 00:14:54.700 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.700 EAL: request: mp_malloc_sync 00:14:54.700 EAL: No shared files mode enabled, IPC is disabled 00:14:54.700 EAL: Heap on socket 0 was expanded by 34MB 00:14:54.700 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.700 EAL: request: mp_malloc_sync 00:14:54.700 EAL: No shared files mode enabled, IPC is disabled 00:14:54.700 EAL: Heap on socket 0 was shrunk by 34MB 00:14:54.959 EAL: Trying to obtain current memory policy. 00:14:54.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:54.959 EAL: Restoring previous memory policy: 4 00:14:54.959 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.959 EAL: request: mp_malloc_sync 00:14:54.959 EAL: No shared files mode enabled, IPC is disabled 00:14:54.959 EAL: Heap on socket 0 was expanded by 66MB 00:14:54.959 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.959 EAL: request: mp_malloc_sync 00:14:54.959 EAL: No shared files mode enabled, IPC is disabled 00:14:54.959 EAL: Heap on socket 0 was shrunk by 66MB 00:14:54.959 EAL: Trying to obtain current memory policy. 00:14:54.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:55.217 EAL: Restoring previous memory policy: 4 00:14:55.217 EAL: Calling mem event callback 'spdk:(nil)' 00:14:55.217 EAL: request: mp_malloc_sync 00:14:55.217 EAL: No shared files mode enabled, IPC is disabled 00:14:55.217 EAL: Heap on socket 0 was expanded by 130MB 00:14:55.217 EAL: Calling mem event callback 'spdk:(nil)' 00:14:55.475 EAL: request: mp_malloc_sync 00:14:55.475 EAL: No shared files mode enabled, IPC is disabled 00:14:55.475 EAL: Heap on socket 0 was shrunk by 130MB 00:14:55.475 EAL: Trying to obtain current memory policy. 
00:14:55.475 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:55.734 EAL: Restoring previous memory policy: 4 00:14:55.734 EAL: Calling mem event callback 'spdk:(nil)' 00:14:55.734 EAL: request: mp_malloc_sync 00:14:55.734 EAL: No shared files mode enabled, IPC is disabled 00:14:55.734 EAL: Heap on socket 0 was expanded by 258MB 00:14:55.991 EAL: Calling mem event callback 'spdk:(nil)' 00:14:55.991 EAL: request: mp_malloc_sync 00:14:55.991 EAL: No shared files mode enabled, IPC is disabled 00:14:55.991 EAL: Heap on socket 0 was shrunk by 258MB 00:14:56.563 EAL: Trying to obtain current memory policy. 00:14:56.563 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:56.563 EAL: Restoring previous memory policy: 4 00:14:56.563 EAL: Calling mem event callback 'spdk:(nil)' 00:14:56.563 EAL: request: mp_malloc_sync 00:14:56.563 EAL: No shared files mode enabled, IPC is disabled 00:14:56.563 EAL: Heap on socket 0 was expanded by 514MB 00:14:57.498 EAL: Calling mem event callback 'spdk:(nil)' 00:14:57.498 EAL: request: mp_malloc_sync 00:14:57.498 EAL: No shared files mode enabled, IPC is disabled 00:14:57.498 EAL: Heap on socket 0 was shrunk by 514MB 00:14:58.431 EAL: Trying to obtain current memory policy. 
00:14:58.431 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:58.431 EAL: Restoring previous memory policy: 4 00:14:58.431 EAL: Calling mem event callback 'spdk:(nil)' 00:14:58.431 EAL: request: mp_malloc_sync 00:14:58.431 EAL: No shared files mode enabled, IPC is disabled 00:14:58.431 EAL: Heap on socket 0 was expanded by 1026MB 00:15:00.331 EAL: Calling mem event callback 'spdk:(nil)' 00:15:00.331 EAL: request: mp_malloc_sync 00:15:00.331 EAL: No shared files mode enabled, IPC is disabled 00:15:00.331 EAL: Heap on socket 0 was shrunk by 1026MB 00:15:02.229 passed 00:15:02.229 00:15:02.229 Run Summary: Type Total Ran Passed Failed Inactive 00:15:02.229 suites 1 1 n/a 0 0 00:15:02.229 tests 2 2 2 0 0 00:15:02.229 asserts 5306 5306 5306 0 n/a 00:15:02.229 00:15:02.229 Elapsed time = 7.580 seconds 00:15:02.229 EAL: Calling mem event callback 'spdk:(nil)' 00:15:02.229 EAL: request: mp_malloc_sync 00:15:02.229 EAL: No shared files mode enabled, IPC is disabled 00:15:02.229 EAL: Heap on socket 0 was shrunk by 2MB 00:15:02.229 EAL: No shared files mode enabled, IPC is disabled 00:15:02.229 EAL: No shared files mode enabled, IPC is disabled 00:15:02.229 EAL: No shared files mode enabled, IPC is disabled 00:15:02.229 00:15:02.229 real 0m7.901s 00:15:02.229 user 0m6.722s 00:15:02.229 sys 0m1.013s 00:15:02.229 16:53:03 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:02.229 16:53:03 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:15:02.229 ************************************ 00:15:02.229 END TEST env_vtophys 00:15:02.229 ************************************ 00:15:02.229 16:53:03 env -- common/autotest_common.sh@1142 -- # return 0 00:15:02.229 16:53:03 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:15:02.229 16:53:03 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:02.229 16:53:03 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:02.229 16:53:03 env -- 
common/autotest_common.sh@10 -- # set +x 00:15:02.229 ************************************ 00:15:02.229 START TEST env_pci 00:15:02.229 ************************************ 00:15:02.229 16:53:03 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:15:02.229 00:15:02.229 00:15:02.229 CUnit - A unit testing framework for C - Version 2.1-3 00:15:02.229 http://cunit.sourceforge.net/ 00:15:02.229 00:15:02.229 00:15:02.229 Suite: pci 00:15:02.229 Test: pci_hook ...[2024-07-22 16:53:03.471537] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58941 has claimed it 00:15:02.229 passed 00:15:02.229 00:15:02.229 Run Summary: Type Total Ran Passed Failed Inactive 00:15:02.229 suites 1 1 n/a 0 0 00:15:02.229 tests 1 1 1 0 0 00:15:02.229 asserts 25 25 25 0 n/a 00:15:02.229 00:15:02.229 Elapsed time = 0.007 secondsEAL: Cannot find device (10000:00:01.0) 00:15:02.229 EAL: Failed to attach device on primary process 00:15:02.229 00:15:02.229 00:15:02.229 real 0m0.078s 00:15:02.229 user 0m0.036s 00:15:02.229 sys 0m0.041s 00:15:02.229 16:53:03 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:02.229 16:53:03 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:15:02.229 ************************************ 00:15:02.229 END TEST env_pci 00:15:02.229 ************************************ 00:15:02.229 16:53:03 env -- common/autotest_common.sh@1142 -- # return 0 00:15:02.229 16:53:03 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:15:02.229 16:53:03 env -- env/env.sh@15 -- # uname 00:15:02.229 16:53:03 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:15:02.229 16:53:03 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:15:02.229 16:53:03 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 
--base-virtaddr=0x200000000000 00:15:02.229 16:53:03 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:02.229 16:53:03 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:02.229 16:53:03 env -- common/autotest_common.sh@10 -- # set +x 00:15:02.229 ************************************ 00:15:02.229 START TEST env_dpdk_post_init 00:15:02.229 ************************************ 00:15:02.229 16:53:03 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:15:02.229 EAL: Detected CPU lcores: 10 00:15:02.229 EAL: Detected NUMA nodes: 1 00:15:02.229 EAL: Detected shared linkage of DPDK 00:15:02.229 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:15:02.229 EAL: Selected IOVA mode 'PA' 00:15:02.229 TELEMETRY: No legacy callbacks, legacy socket not created 00:15:02.229 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:15:02.229 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:15:02.229 Starting DPDK initialization... 00:15:02.229 Starting SPDK post initialization... 00:15:02.229 SPDK NVMe probe 00:15:02.229 Attaching to 0000:00:10.0 00:15:02.229 Attaching to 0000:00:11.0 00:15:02.229 Attached to 0000:00:10.0 00:15:02.229 Attached to 0000:00:11.0 00:15:02.229 Cleaning up... 
00:15:02.488 00:15:02.488 real 0m0.279s 00:15:02.488 user 0m0.078s 00:15:02.488 sys 0m0.101s 00:15:02.488 16:53:03 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:02.488 16:53:03 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:15:02.488 ************************************ 00:15:02.488 END TEST env_dpdk_post_init 00:15:02.488 ************************************ 00:15:02.488 16:53:03 env -- common/autotest_common.sh@1142 -- # return 0 00:15:02.489 16:53:03 env -- env/env.sh@26 -- # uname 00:15:02.489 16:53:03 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:15:02.489 16:53:03 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:15:02.489 16:53:03 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:02.489 16:53:03 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:02.489 16:53:03 env -- common/autotest_common.sh@10 -- # set +x 00:15:02.489 ************************************ 00:15:02.489 START TEST env_mem_callbacks 00:15:02.489 ************************************ 00:15:02.489 16:53:03 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:15:02.489 EAL: Detected CPU lcores: 10 00:15:02.489 EAL: Detected NUMA nodes: 1 00:15:02.489 EAL: Detected shared linkage of DPDK 00:15:02.489 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:15:02.489 EAL: Selected IOVA mode 'PA' 00:15:02.489 00:15:02.489 00:15:02.489 CUnit - A unit testing framework for C - Version 2.1-3 00:15:02.489 http://cunit.sourceforge.net/ 00:15:02.489 00:15:02.489 00:15:02.489 Suite: memory 00:15:02.489 Test: test ... 
00:15:02.489 register 0x200000200000 2097152 00:15:02.489 malloc 3145728 00:15:02.489 TELEMETRY: No legacy callbacks, legacy socket not created 00:15:02.489 register 0x200000400000 4194304 00:15:02.489 buf 0x2000004fffc0 len 3145728 PASSED 00:15:02.489 malloc 64 00:15:02.489 buf 0x2000004ffec0 len 64 PASSED 00:15:02.489 malloc 4194304 00:15:02.489 register 0x200000800000 6291456 00:15:02.489 buf 0x2000009fffc0 len 4194304 PASSED 00:15:02.489 free 0x2000004fffc0 3145728 00:15:02.489 free 0x2000004ffec0 64 00:15:02.489 unregister 0x200000400000 4194304 PASSED 00:15:02.747 free 0x2000009fffc0 4194304 00:15:02.747 unregister 0x200000800000 6291456 PASSED 00:15:02.747 malloc 8388608 00:15:02.747 register 0x200000400000 10485760 00:15:02.747 buf 0x2000005fffc0 len 8388608 PASSED 00:15:02.747 free 0x2000005fffc0 8388608 00:15:02.747 unregister 0x200000400000 10485760 PASSED 00:15:02.747 passed 00:15:02.747 00:15:02.747 Run Summary: Type Total Ran Passed Failed Inactive 00:15:02.747 suites 1 1 n/a 0 0 00:15:02.747 tests 1 1 1 0 0 00:15:02.747 asserts 15 15 15 0 n/a 00:15:02.747 00:15:02.747 Elapsed time = 0.065 seconds 00:15:02.747 00:15:02.747 real 0m0.256s 00:15:02.747 user 0m0.092s 00:15:02.747 sys 0m0.063s 00:15:02.747 16:53:04 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:02.747 16:53:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:15:02.747 ************************************ 00:15:02.747 END TEST env_mem_callbacks 00:15:02.747 ************************************ 00:15:02.747 16:53:04 env -- common/autotest_common.sh@1142 -- # return 0 00:15:02.747 00:15:02.747 real 0m9.229s 00:15:02.747 user 0m7.355s 00:15:02.747 sys 0m1.467s 00:15:02.747 16:53:04 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:02.747 16:53:04 env -- common/autotest_common.sh@10 -- # set +x 00:15:02.747 ************************************ 00:15:02.747 END TEST env 00:15:02.747 ************************************ 
00:15:02.747 16:53:04 -- common/autotest_common.sh@1142 -- # return 0 00:15:02.747 16:53:04 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:15:02.747 16:53:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:02.747 16:53:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:02.747 16:53:04 -- common/autotest_common.sh@10 -- # set +x 00:15:02.747 ************************************ 00:15:02.747 START TEST rpc 00:15:02.747 ************************************ 00:15:02.747 16:53:04 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:15:02.747 * Looking for test storage... 00:15:02.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:15:02.747 16:53:04 rpc -- rpc/rpc.sh@65 -- # spdk_pid=59060 00:15:02.747 16:53:04 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:15:02.747 16:53:04 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:02.747 16:53:04 rpc -- rpc/rpc.sh@67 -- # waitforlisten 59060 00:15:02.747 16:53:04 rpc -- common/autotest_common.sh@829 -- # '[' -z 59060 ']' 00:15:02.747 16:53:04 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.747 16:53:04 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.747 16:53:04 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.747 16:53:04 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.747 16:53:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.005 [2024-07-22 16:53:04.516035] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:15:03.006 [2024-07-22 16:53:04.516222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59060 ] 00:15:03.263 [2024-07-22 16:53:04.678271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.521 [2024-07-22 16:53:04.973178] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:15:03.521 [2024-07-22 16:53:04.973246] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 59060' to capture a snapshot of events at runtime. 00:15:03.521 [2024-07-22 16:53:04.973267] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.521 [2024-07-22 16:53:04.973294] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.521 [2024-07-22 16:53:04.973310] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid59060 for offline analysis/debug. 
00:15:03.521 [2024-07-22 16:53:04.973354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.455 16:53:05 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:04.455 16:53:05 rpc -- common/autotest_common.sh@862 -- # return 0 00:15:04.455 16:53:05 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:15:04.455 16:53:05 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:15:04.455 16:53:05 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:15:04.455 16:53:05 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:15:04.455 16:53:05 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:04.455 16:53:05 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:04.455 16:53:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.455 ************************************ 00:15:04.455 START TEST rpc_integrity 00:15:04.455 ************************************ 00:15:04.455 16:53:05 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:15:04.455 16:53:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:04.455 16:53:05 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.455 16:53:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:04.455 16:53:05 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.455 16:53:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:04.455 16:53:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:15:04.455 16:53:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:04.455 16:53:05 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:04.455 16:53:05 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.455 16:53:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:04.455 16:53:05 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.455 16:53:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:15:04.455 16:53:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:15:04.455 16:53:05 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.455 16:53:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:04.455 16:53:05 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.455 16:53:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:04.455 { 00:15:04.455 "name": "Malloc0", 00:15:04.455 "aliases": [ 00:15:04.455 "94ed9d6d-c588-48fd-b536-69c80336e431" 00:15:04.455 ], 00:15:04.455 "product_name": "Malloc disk", 00:15:04.455 "block_size": 512, 00:15:04.455 "num_blocks": 16384, 00:15:04.455 "uuid": "94ed9d6d-c588-48fd-b536-69c80336e431", 00:15:04.455 "assigned_rate_limits": { 00:15:04.455 "rw_ios_per_sec": 0, 00:15:04.455 "rw_mbytes_per_sec": 0, 00:15:04.455 "r_mbytes_per_sec": 0, 00:15:04.455 "w_mbytes_per_sec": 0 00:15:04.455 }, 00:15:04.455 "claimed": false, 00:15:04.455 "zoned": false, 00:15:04.455 "supported_io_types": { 00:15:04.455 "read": true, 00:15:04.455 "write": true, 00:15:04.455 "unmap": true, 00:15:04.455 "flush": true, 00:15:04.455 "reset": true, 00:15:04.455 "nvme_admin": false, 00:15:04.455 "nvme_io": false, 00:15:04.455 "nvme_io_md": false, 00:15:04.455 "write_zeroes": true, 00:15:04.455 "zcopy": true, 00:15:04.455 "get_zone_info": false, 00:15:04.455 "zone_management": false, 00:15:04.455 "zone_append": false, 00:15:04.455 "compare": false, 00:15:04.455 "compare_and_write": false, 00:15:04.455 "abort": true, 00:15:04.455 "seek_hole": false, 
00:15:04.455 "seek_data": false, 00:15:04.455 "copy": true, 00:15:04.455 "nvme_iov_md": false 00:15:04.455 }, 00:15:04.455 "memory_domains": [ 00:15:04.455 { 00:15:04.455 "dma_device_id": "system", 00:15:04.455 "dma_device_type": 1 00:15:04.455 }, 00:15:04.455 { 00:15:04.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.455 "dma_device_type": 2 00:15:04.455 } 00:15:04.455 ], 00:15:04.455 "driver_specific": {} 00:15:04.455 } 00:15:04.455 ]' 00:15:04.455 16:53:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:15:04.455 16:53:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:04.455 16:53:05 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:15:04.455 16:53:05 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.455 16:53:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:04.455 [2024-07-22 16:53:05.941721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:15:04.455 [2024-07-22 16:53:05.941792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.455 [2024-07-22 16:53:05.941833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:04.455 [2024-07-22 16:53:05.941850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.455 [2024-07-22 16:53:05.944728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.455 [2024-07-22 16:53:05.944780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:04.455 Passthru0 00:15:04.455 16:53:05 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.455 16:53:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:04.455 16:53:05 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.455 16:53:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:15:04.455 16:53:05 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.455 16:53:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:04.455 { 00:15:04.455 "name": "Malloc0", 00:15:04.455 "aliases": [ 00:15:04.455 "94ed9d6d-c588-48fd-b536-69c80336e431" 00:15:04.455 ], 00:15:04.455 "product_name": "Malloc disk", 00:15:04.455 "block_size": 512, 00:15:04.455 "num_blocks": 16384, 00:15:04.455 "uuid": "94ed9d6d-c588-48fd-b536-69c80336e431", 00:15:04.455 "assigned_rate_limits": { 00:15:04.455 "rw_ios_per_sec": 0, 00:15:04.455 "rw_mbytes_per_sec": 0, 00:15:04.455 "r_mbytes_per_sec": 0, 00:15:04.455 "w_mbytes_per_sec": 0 00:15:04.455 }, 00:15:04.455 "claimed": true, 00:15:04.455 "claim_type": "exclusive_write", 00:15:04.455 "zoned": false, 00:15:04.455 "supported_io_types": { 00:15:04.455 "read": true, 00:15:04.455 "write": true, 00:15:04.455 "unmap": true, 00:15:04.455 "flush": true, 00:15:04.455 "reset": true, 00:15:04.455 "nvme_admin": false, 00:15:04.455 "nvme_io": false, 00:15:04.455 "nvme_io_md": false, 00:15:04.455 "write_zeroes": true, 00:15:04.455 "zcopy": true, 00:15:04.455 "get_zone_info": false, 00:15:04.455 "zone_management": false, 00:15:04.456 "zone_append": false, 00:15:04.456 "compare": false, 00:15:04.456 "compare_and_write": false, 00:15:04.456 "abort": true, 00:15:04.456 "seek_hole": false, 00:15:04.456 "seek_data": false, 00:15:04.456 "copy": true, 00:15:04.456 "nvme_iov_md": false 00:15:04.456 }, 00:15:04.456 "memory_domains": [ 00:15:04.456 { 00:15:04.456 "dma_device_id": "system", 00:15:04.456 "dma_device_type": 1 00:15:04.456 }, 00:15:04.456 { 00:15:04.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.456 "dma_device_type": 2 00:15:04.456 } 00:15:04.456 ], 00:15:04.456 "driver_specific": {} 00:15:04.456 }, 00:15:04.456 { 00:15:04.456 "name": "Passthru0", 00:15:04.456 "aliases": [ 00:15:04.456 "48ddafaa-06a4-5c31-bb1d-6623ec98f17a" 00:15:04.456 ], 00:15:04.456 "product_name": "passthru", 00:15:04.456 
"block_size": 512, 00:15:04.456 "num_blocks": 16384, 00:15:04.456 "uuid": "48ddafaa-06a4-5c31-bb1d-6623ec98f17a", 00:15:04.456 "assigned_rate_limits": { 00:15:04.456 "rw_ios_per_sec": 0, 00:15:04.456 "rw_mbytes_per_sec": 0, 00:15:04.456 "r_mbytes_per_sec": 0, 00:15:04.456 "w_mbytes_per_sec": 0 00:15:04.456 }, 00:15:04.456 "claimed": false, 00:15:04.456 "zoned": false, 00:15:04.456 "supported_io_types": { 00:15:04.456 "read": true, 00:15:04.456 "write": true, 00:15:04.456 "unmap": true, 00:15:04.456 "flush": true, 00:15:04.456 "reset": true, 00:15:04.456 "nvme_admin": false, 00:15:04.456 "nvme_io": false, 00:15:04.456 "nvme_io_md": false, 00:15:04.456 "write_zeroes": true, 00:15:04.456 "zcopy": true, 00:15:04.456 "get_zone_info": false, 00:15:04.456 "zone_management": false, 00:15:04.456 "zone_append": false, 00:15:04.456 "compare": false, 00:15:04.456 "compare_and_write": false, 00:15:04.456 "abort": true, 00:15:04.456 "seek_hole": false, 00:15:04.456 "seek_data": false, 00:15:04.456 "copy": true, 00:15:04.456 "nvme_iov_md": false 00:15:04.456 }, 00:15:04.456 "memory_domains": [ 00:15:04.456 { 00:15:04.456 "dma_device_id": "system", 00:15:04.456 "dma_device_type": 1 00:15:04.456 }, 00:15:04.456 { 00:15:04.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.456 "dma_device_type": 2 00:15:04.456 } 00:15:04.456 ], 00:15:04.456 "driver_specific": { 00:15:04.456 "passthru": { 00:15:04.456 "name": "Passthru0", 00:15:04.456 "base_bdev_name": "Malloc0" 00:15:04.456 } 00:15:04.456 } 00:15:04.456 } 00:15:04.456 ]' 00:15:04.456 16:53:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:15:04.456 16:53:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:04.456 16:53:06 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:04.456 16:53:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.456 16:53:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:04.456 16:53:06 rpc.rpc_integrity 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.456 16:53:06 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:04.456 16:53:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.456 16:53:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:04.714 16:53:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.714 16:53:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:04.714 16:53:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.714 16:53:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:04.714 16:53:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.714 16:53:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:04.714 16:53:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:15:04.714 16:53:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:15:04.714 00:15:04.714 real 0m0.362s 00:15:04.714 user 0m0.218s 00:15:04.714 sys 0m0.046s 00:15:04.714 16:53:06 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:04.714 16:53:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:04.714 ************************************ 00:15:04.714 END TEST rpc_integrity 00:15:04.714 ************************************ 00:15:04.714 16:53:06 rpc -- common/autotest_common.sh@1142 -- # return 0 00:15:04.714 16:53:06 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:15:04.714 16:53:06 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:04.714 16:53:06 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:04.714 16:53:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.714 ************************************ 00:15:04.714 START TEST rpc_plugins 00:15:04.714 ************************************ 00:15:04.714 16:53:06 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # 
rpc_plugins 00:15:04.714 16:53:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:15:04.715 16:53:06 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.715 16:53:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:04.715 16:53:06 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.715 16:53:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:15:04.715 16:53:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:15:04.715 16:53:06 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.715 16:53:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:04.715 16:53:06 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.715 16:53:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:15:04.715 { 00:15:04.715 "name": "Malloc1", 00:15:04.715 "aliases": [ 00:15:04.715 "39ae5e53-7bf8-4210-98e4-502196da14d4" 00:15:04.715 ], 00:15:04.715 "product_name": "Malloc disk", 00:15:04.715 "block_size": 4096, 00:15:04.715 "num_blocks": 256, 00:15:04.715 "uuid": "39ae5e53-7bf8-4210-98e4-502196da14d4", 00:15:04.715 "assigned_rate_limits": { 00:15:04.715 "rw_ios_per_sec": 0, 00:15:04.715 "rw_mbytes_per_sec": 0, 00:15:04.715 "r_mbytes_per_sec": 0, 00:15:04.715 "w_mbytes_per_sec": 0 00:15:04.715 }, 00:15:04.715 "claimed": false, 00:15:04.715 "zoned": false, 00:15:04.715 "supported_io_types": { 00:15:04.715 "read": true, 00:15:04.715 "write": true, 00:15:04.715 "unmap": true, 00:15:04.715 "flush": true, 00:15:04.715 "reset": true, 00:15:04.715 "nvme_admin": false, 00:15:04.715 "nvme_io": false, 00:15:04.715 "nvme_io_md": false, 00:15:04.715 "write_zeroes": true, 00:15:04.715 "zcopy": true, 00:15:04.715 "get_zone_info": false, 00:15:04.715 "zone_management": false, 00:15:04.715 "zone_append": false, 00:15:04.715 "compare": false, 00:15:04.715 "compare_and_write": false, 00:15:04.715 "abort": true, 00:15:04.715 
"seek_hole": false, 00:15:04.715 "seek_data": false, 00:15:04.715 "copy": true, 00:15:04.715 "nvme_iov_md": false 00:15:04.715 }, 00:15:04.715 "memory_domains": [ 00:15:04.715 { 00:15:04.715 "dma_device_id": "system", 00:15:04.715 "dma_device_type": 1 00:15:04.715 }, 00:15:04.715 { 00:15:04.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.715 "dma_device_type": 2 00:15:04.715 } 00:15:04.715 ], 00:15:04.715 "driver_specific": {} 00:15:04.715 } 00:15:04.715 ]' 00:15:04.715 16:53:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:15:04.715 16:53:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:15:04.715 16:53:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:15:04.715 16:53:06 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.715 16:53:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:04.715 16:53:06 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.715 16:53:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:15:04.715 16:53:06 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.715 16:53:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:04.715 16:53:06 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.715 16:53:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:15:04.715 16:53:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:15:04.973 16:53:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:15:04.973 00:15:04.973 real 0m0.152s 00:15:04.973 user 0m0.098s 00:15:04.973 sys 0m0.017s 00:15:04.973 16:53:06 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:04.973 16:53:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:04.973 ************************************ 00:15:04.973 END TEST rpc_plugins 00:15:04.973 ************************************ 00:15:04.973 16:53:06 rpc -- 
common/autotest_common.sh@1142 -- # return 0 00:15:04.973 16:53:06 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:15:04.973 16:53:06 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:04.973 16:53:06 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:04.973 16:53:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.973 ************************************ 00:15:04.973 START TEST rpc_trace_cmd_test 00:15:04.973 ************************************ 00:15:04.973 16:53:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:15:04.973 16:53:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:15:04.973 16:53:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:15:04.973 16:53:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.973 16:53:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.973 16:53:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.973 16:53:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:15:04.973 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid59060", 00:15:04.973 "tpoint_group_mask": "0x8", 00:15:04.973 "iscsi_conn": { 00:15:04.973 "mask": "0x2", 00:15:04.973 "tpoint_mask": "0x0" 00:15:04.973 }, 00:15:04.973 "scsi": { 00:15:04.973 "mask": "0x4", 00:15:04.973 "tpoint_mask": "0x0" 00:15:04.973 }, 00:15:04.973 "bdev": { 00:15:04.973 "mask": "0x8", 00:15:04.973 "tpoint_mask": "0xffffffffffffffff" 00:15:04.973 }, 00:15:04.973 "nvmf_rdma": { 00:15:04.973 "mask": "0x10", 00:15:04.973 "tpoint_mask": "0x0" 00:15:04.973 }, 00:15:04.973 "nvmf_tcp": { 00:15:04.973 "mask": "0x20", 00:15:04.973 "tpoint_mask": "0x0" 00:15:04.973 }, 00:15:04.973 "ftl": { 00:15:04.973 "mask": "0x40", 00:15:04.973 "tpoint_mask": "0x0" 00:15:04.973 }, 00:15:04.973 "blobfs": { 00:15:04.973 "mask": "0x80", 00:15:04.973 "tpoint_mask": "0x0" 00:15:04.973 }, 
00:15:04.973 "dsa": { 00:15:04.973 "mask": "0x200", 00:15:04.973 "tpoint_mask": "0x0" 00:15:04.973 }, 00:15:04.973 "thread": { 00:15:04.973 "mask": "0x400", 00:15:04.973 "tpoint_mask": "0x0" 00:15:04.973 }, 00:15:04.973 "nvme_pcie": { 00:15:04.973 "mask": "0x800", 00:15:04.973 "tpoint_mask": "0x0" 00:15:04.973 }, 00:15:04.973 "iaa": { 00:15:04.973 "mask": "0x1000", 00:15:04.973 "tpoint_mask": "0x0" 00:15:04.973 }, 00:15:04.973 "nvme_tcp": { 00:15:04.973 "mask": "0x2000", 00:15:04.973 "tpoint_mask": "0x0" 00:15:04.973 }, 00:15:04.973 "bdev_nvme": { 00:15:04.973 "mask": "0x4000", 00:15:04.973 "tpoint_mask": "0x0" 00:15:04.973 }, 00:15:04.973 "sock": { 00:15:04.973 "mask": "0x8000", 00:15:04.973 "tpoint_mask": "0x0" 00:15:04.973 } 00:15:04.973 }' 00:15:04.973 16:53:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:15:04.973 16:53:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:15:04.973 16:53:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:15:04.973 16:53:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:15:04.973 16:53:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:15:04.973 16:53:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:15:04.973 16:53:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:15:05.232 16:53:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:15:05.232 16:53:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:15:05.232 16:53:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:15:05.232 00:15:05.232 real 0m0.267s 00:15:05.232 user 0m0.228s 00:15:05.232 sys 0m0.032s 00:15:05.232 16:53:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:05.232 ************************************ 00:15:05.232 16:53:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.232 END TEST 
rpc_trace_cmd_test 00:15:05.232 ************************************ 00:15:05.232 16:53:06 rpc -- common/autotest_common.sh@1142 -- # return 0 00:15:05.232 16:53:06 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:15:05.232 16:53:06 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:15:05.232 16:53:06 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:15:05.232 16:53:06 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:05.232 16:53:06 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:05.232 16:53:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.232 ************************************ 00:15:05.232 START TEST rpc_daemon_integrity 00:15:05.232 ************************************ 00:15:05.232 16:53:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:15:05.232 16:53:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:05.232 16:53:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.232 16:53:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:05.232 16:53:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.232 16:53:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:05.232 16:53:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:15:05.232 16:53:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:05.232 16:53:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:05.232 16:53:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.232 16:53:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:05.232 16:53:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.232 16:53:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:15:05.232 16:53:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- 
# rpc_cmd bdev_get_bdevs 00:15:05.232 16:53:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.233 16:53:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:05.233 16:53:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.233 16:53:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:05.233 { 00:15:05.233 "name": "Malloc2", 00:15:05.233 "aliases": [ 00:15:05.233 "b8063be2-bfce-4047-a3fa-0e371bc5409c" 00:15:05.233 ], 00:15:05.233 "product_name": "Malloc disk", 00:15:05.233 "block_size": 512, 00:15:05.233 "num_blocks": 16384, 00:15:05.233 "uuid": "b8063be2-bfce-4047-a3fa-0e371bc5409c", 00:15:05.233 "assigned_rate_limits": { 00:15:05.233 "rw_ios_per_sec": 0, 00:15:05.233 "rw_mbytes_per_sec": 0, 00:15:05.233 "r_mbytes_per_sec": 0, 00:15:05.233 "w_mbytes_per_sec": 0 00:15:05.233 }, 00:15:05.233 "claimed": false, 00:15:05.233 "zoned": false, 00:15:05.233 "supported_io_types": { 00:15:05.233 "read": true, 00:15:05.233 "write": true, 00:15:05.233 "unmap": true, 00:15:05.233 "flush": true, 00:15:05.233 "reset": true, 00:15:05.233 "nvme_admin": false, 00:15:05.233 "nvme_io": false, 00:15:05.233 "nvme_io_md": false, 00:15:05.233 "write_zeroes": true, 00:15:05.233 "zcopy": true, 00:15:05.233 "get_zone_info": false, 00:15:05.233 "zone_management": false, 00:15:05.233 "zone_append": false, 00:15:05.233 "compare": false, 00:15:05.233 "compare_and_write": false, 00:15:05.233 "abort": true, 00:15:05.233 "seek_hole": false, 00:15:05.233 "seek_data": false, 00:15:05.233 "copy": true, 00:15:05.233 "nvme_iov_md": false 00:15:05.233 }, 00:15:05.233 "memory_domains": [ 00:15:05.233 { 00:15:05.233 "dma_device_id": "system", 00:15:05.233 "dma_device_type": 1 00:15:05.233 }, 00:15:05.233 { 00:15:05.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.233 "dma_device_type": 2 00:15:05.233 } 00:15:05.233 ], 00:15:05.233 "driver_specific": {} 00:15:05.233 } 00:15:05.233 ]' 
00:15:05.233 16:53:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:15:05.491 16:53:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:05.491 16:53:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:15:05.491 16:53:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.491 16:53:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:05.491 [2024-07-22 16:53:06.895447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:15:05.491 [2024-07-22 16:53:06.895527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.491 [2024-07-22 16:53:06.895567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:05.491 [2024-07-22 16:53:06.895582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.491 [2024-07-22 16:53:06.898520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.491 [2024-07-22 16:53:06.898564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:05.491 Passthru0 00:15:05.491 16:53:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.491 16:53:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:05.491 16:53:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.491 16:53:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:05.491 16:53:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.491 16:53:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:05.491 { 00:15:05.491 "name": "Malloc2", 00:15:05.491 "aliases": [ 00:15:05.491 "b8063be2-bfce-4047-a3fa-0e371bc5409c" 00:15:05.491 ], 00:15:05.491 "product_name": "Malloc disk", 00:15:05.491 "block_size": 
512, 00:15:05.491 "num_blocks": 16384, 00:15:05.491 "uuid": "b8063be2-bfce-4047-a3fa-0e371bc5409c", 00:15:05.491 "assigned_rate_limits": { 00:15:05.491 "rw_ios_per_sec": 0, 00:15:05.491 "rw_mbytes_per_sec": 0, 00:15:05.491 "r_mbytes_per_sec": 0, 00:15:05.491 "w_mbytes_per_sec": 0 00:15:05.491 }, 00:15:05.491 "claimed": true, 00:15:05.491 "claim_type": "exclusive_write", 00:15:05.491 "zoned": false, 00:15:05.491 "supported_io_types": { 00:15:05.491 "read": true, 00:15:05.491 "write": true, 00:15:05.491 "unmap": true, 00:15:05.491 "flush": true, 00:15:05.491 "reset": true, 00:15:05.491 "nvme_admin": false, 00:15:05.491 "nvme_io": false, 00:15:05.491 "nvme_io_md": false, 00:15:05.491 "write_zeroes": true, 00:15:05.491 "zcopy": true, 00:15:05.491 "get_zone_info": false, 00:15:05.491 "zone_management": false, 00:15:05.491 "zone_append": false, 00:15:05.491 "compare": false, 00:15:05.491 "compare_and_write": false, 00:15:05.491 "abort": true, 00:15:05.491 "seek_hole": false, 00:15:05.491 "seek_data": false, 00:15:05.491 "copy": true, 00:15:05.491 "nvme_iov_md": false 00:15:05.491 }, 00:15:05.491 "memory_domains": [ 00:15:05.491 { 00:15:05.491 "dma_device_id": "system", 00:15:05.491 "dma_device_type": 1 00:15:05.491 }, 00:15:05.491 { 00:15:05.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.491 "dma_device_type": 2 00:15:05.491 } 00:15:05.491 ], 00:15:05.491 "driver_specific": {} 00:15:05.491 }, 00:15:05.491 { 00:15:05.491 "name": "Passthru0", 00:15:05.491 "aliases": [ 00:15:05.491 "1db5c33a-a7f3-58f3-b890-b3d267f867ef" 00:15:05.491 ], 00:15:05.491 "product_name": "passthru", 00:15:05.491 "block_size": 512, 00:15:05.491 "num_blocks": 16384, 00:15:05.491 "uuid": "1db5c33a-a7f3-58f3-b890-b3d267f867ef", 00:15:05.491 "assigned_rate_limits": { 00:15:05.491 "rw_ios_per_sec": 0, 00:15:05.491 "rw_mbytes_per_sec": 0, 00:15:05.491 "r_mbytes_per_sec": 0, 00:15:05.491 "w_mbytes_per_sec": 0 00:15:05.491 }, 00:15:05.491 "claimed": false, 00:15:05.491 "zoned": false, 00:15:05.491 
"supported_io_types": { 00:15:05.491 "read": true, 00:15:05.491 "write": true, 00:15:05.491 "unmap": true, 00:15:05.491 "flush": true, 00:15:05.491 "reset": true, 00:15:05.491 "nvme_admin": false, 00:15:05.491 "nvme_io": false, 00:15:05.491 "nvme_io_md": false, 00:15:05.491 "write_zeroes": true, 00:15:05.491 "zcopy": true, 00:15:05.491 "get_zone_info": false, 00:15:05.491 "zone_management": false, 00:15:05.491 "zone_append": false, 00:15:05.491 "compare": false, 00:15:05.491 "compare_and_write": false, 00:15:05.491 "abort": true, 00:15:05.491 "seek_hole": false, 00:15:05.491 "seek_data": false, 00:15:05.491 "copy": true, 00:15:05.491 "nvme_iov_md": false 00:15:05.491 }, 00:15:05.491 "memory_domains": [ 00:15:05.491 { 00:15:05.491 "dma_device_id": "system", 00:15:05.491 "dma_device_type": 1 00:15:05.491 }, 00:15:05.491 { 00:15:05.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.491 "dma_device_type": 2 00:15:05.491 } 00:15:05.491 ], 00:15:05.491 "driver_specific": { 00:15:05.491 "passthru": { 00:15:05.491 "name": "Passthru0", 00:15:05.491 "base_bdev_name": "Malloc2" 00:15:05.491 } 00:15:05.491 } 00:15:05.491 } 00:15:05.491 ]' 00:15:05.491 16:53:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:15:05.491 16:53:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:05.491 16:53:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:05.491 16:53:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.491 16:53:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:05.491 16:53:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.491 16:53:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:05.491 16:53:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.491 16:53:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:15:05.491 16:53:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.491 16:53:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:05.491 16:53:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.491 16:53:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:05.491 16:53:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.491 16:53:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:05.491 16:53:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:15:05.491 16:53:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:15:05.491 00:15:05.491 real 0m0.365s 00:15:05.491 user 0m0.213s 00:15:05.492 sys 0m0.039s 00:15:05.492 16:53:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:05.492 16:53:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:05.492 ************************************ 00:15:05.492 END TEST rpc_daemon_integrity 00:15:05.492 ************************************ 00:15:05.749 16:53:07 rpc -- common/autotest_common.sh@1142 -- # return 0 00:15:05.749 16:53:07 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:05.749 16:53:07 rpc -- rpc/rpc.sh@84 -- # killprocess 59060 00:15:05.749 16:53:07 rpc -- common/autotest_common.sh@948 -- # '[' -z 59060 ']' 00:15:05.749 16:53:07 rpc -- common/autotest_common.sh@952 -- # kill -0 59060 00:15:05.749 16:53:07 rpc -- common/autotest_common.sh@953 -- # uname 00:15:05.749 16:53:07 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:05.749 16:53:07 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59060 00:15:05.749 16:53:07 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:05.749 16:53:07 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:05.749 killing process with pid 
59060 00:15:05.749 16:53:07 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59060' 00:15:05.749 16:53:07 rpc -- common/autotest_common.sh@967 -- # kill 59060 00:15:05.749 16:53:07 rpc -- common/autotest_common.sh@972 -- # wait 59060 00:15:08.325 00:15:08.325 real 0m5.246s 00:15:08.325 user 0m5.890s 00:15:08.325 sys 0m0.872s 00:15:08.325 16:53:09 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:08.325 16:53:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.325 ************************************ 00:15:08.325 END TEST rpc 00:15:08.325 ************************************ 00:15:08.325 16:53:09 -- common/autotest_common.sh@1142 -- # return 0 00:15:08.325 16:53:09 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:15:08.325 16:53:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:08.325 16:53:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:08.325 16:53:09 -- common/autotest_common.sh@10 -- # set +x 00:15:08.325 ************************************ 00:15:08.325 START TEST skip_rpc 00:15:08.325 ************************************ 00:15:08.325 16:53:09 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:15:08.325 * Looking for test storage... 
00:15:08.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:15:08.325 16:53:09 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:08.325 16:53:09 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:08.325 16:53:09 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:15:08.325 16:53:09 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:08.325 16:53:09 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:08.325 16:53:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.325 ************************************ 00:15:08.325 START TEST skip_rpc 00:15:08.325 ************************************ 00:15:08.325 16:53:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:15:08.325 16:53:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59281 00:15:08.325 16:53:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:08.325 16:53:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:15:08.325 16:53:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:15:08.325 [2024-07-22 16:53:09.809312] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:15:08.325 [2024-07-22 16:53:09.809532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59281 ] 00:15:08.583 [2024-07-22 16:53:09.983796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.842 [2024-07-22 16:53:10.310379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59281 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 59281 ']' 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 59281 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59281 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:13.094 killing process with pid 59281 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59281' 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 59281 00:15:13.094 16:53:14 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 59281 00:15:15.624 00:15:15.624 real 0m7.377s 00:15:15.624 user 0m6.794s 00:15:15.624 sys 0m0.471s 00:15:15.624 16:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:15.624 16:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.624 ************************************ 00:15:15.624 END TEST skip_rpc 00:15:15.624 ************************************ 00:15:15.624 16:53:17 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:15:15.624 16:53:17 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:15:15.624 16:53:17 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:15.624 16:53:17 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:15.624 16:53:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 
00:15:15.624 ************************************ 00:15:15.624 START TEST skip_rpc_with_json 00:15:15.624 ************************************ 00:15:15.624 16:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:15:15.624 16:53:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:15:15.624 16:53:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59385 00:15:15.624 16:53:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:15.624 16:53:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:15.624 16:53:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59385 00:15:15.624 16:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59385 ']' 00:15:15.624 16:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.624 16:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:15.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.624 16:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.624 16:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:15.624 16:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:15.624 [2024-07-22 16:53:17.211766] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:15:15.624 [2024-07-22 16:53:17.211946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59385 ] 00:15:15.882 [2024-07-22 16:53:17.376624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.140 [2024-07-22 16:53:17.665764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.073 16:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.073 16:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:15:17.073 16:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:15:17.074 16:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.074 16:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:17.074 [2024-07-22 16:53:18.479036] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:15:17.074 request: 00:15:17.074 { 00:15:17.074 "trtype": "tcp", 00:15:17.074 "method": "nvmf_get_transports", 00:15:17.074 "req_id": 1 00:15:17.074 } 00:15:17.074 Got JSON-RPC error response 00:15:17.074 response: 00:15:17.074 { 00:15:17.074 "code": -19, 00:15:17.074 "message": "No such device" 00:15:17.074 } 00:15:17.074 16:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:17.074 16:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:15:17.074 16:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.074 16:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:17.074 [2024-07-22 16:53:18.491137] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:15:17.074 16:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.074 16:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:15:17.074 16:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.074 16:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:17.074 16:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.074 16:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:17.074 { 00:15:17.074 "subsystems": [ 00:15:17.074 { 00:15:17.074 "subsystem": "keyring", 00:15:17.074 "config": [] 00:15:17.074 }, 00:15:17.074 { 00:15:17.074 "subsystem": "iobuf", 00:15:17.074 "config": [ 00:15:17.074 { 00:15:17.074 "method": "iobuf_set_options", 00:15:17.074 "params": { 00:15:17.074 "small_pool_count": 8192, 00:15:17.074 "large_pool_count": 1024, 00:15:17.074 "small_bufsize": 8192, 00:15:17.074 "large_bufsize": 135168 00:15:17.074 } 00:15:17.074 } 00:15:17.074 ] 00:15:17.074 }, 00:15:17.074 { 00:15:17.074 "subsystem": "sock", 00:15:17.074 "config": [ 00:15:17.074 { 00:15:17.074 "method": "sock_set_default_impl", 00:15:17.074 "params": { 00:15:17.074 "impl_name": "posix" 00:15:17.074 } 00:15:17.074 }, 00:15:17.074 { 00:15:17.074 "method": "sock_impl_set_options", 00:15:17.074 "params": { 00:15:17.074 "impl_name": "ssl", 00:15:17.074 "recv_buf_size": 4096, 00:15:17.074 "send_buf_size": 4096, 00:15:17.074 "enable_recv_pipe": true, 00:15:17.074 "enable_quickack": false, 00:15:17.074 "enable_placement_id": 0, 00:15:17.074 "enable_zerocopy_send_server": true, 00:15:17.074 "enable_zerocopy_send_client": false, 00:15:17.074 "zerocopy_threshold": 0, 00:15:17.074 "tls_version": 0, 00:15:17.074 "enable_ktls": false 00:15:17.074 } 00:15:17.074 }, 00:15:17.074 { 00:15:17.074 "method": "sock_impl_set_options", 00:15:17.074 "params": { 
00:15:17.074 "impl_name": "posix", 00:15:17.074 "recv_buf_size": 2097152, 00:15:17.074 "send_buf_size": 2097152, 00:15:17.074 "enable_recv_pipe": true, 00:15:17.074 "enable_quickack": false, 00:15:17.074 "enable_placement_id": 0, 00:15:17.074 "enable_zerocopy_send_server": true, 00:15:17.074 "enable_zerocopy_send_client": false, 00:15:17.074 "zerocopy_threshold": 0, 00:15:17.074 "tls_version": 0, 00:15:17.074 "enable_ktls": false 00:15:17.074 } 00:15:17.074 } 00:15:17.074 ] 00:15:17.074 }, 00:15:17.074 { 00:15:17.074 "subsystem": "vmd", 00:15:17.074 "config": [] 00:15:17.074 }, 00:15:17.074 { 00:15:17.074 "subsystem": "accel", 00:15:17.074 "config": [ 00:15:17.074 { 00:15:17.074 "method": "accel_set_options", 00:15:17.074 "params": { 00:15:17.074 "small_cache_size": 128, 00:15:17.074 "large_cache_size": 16, 00:15:17.074 "task_count": 2048, 00:15:17.074 "sequence_count": 2048, 00:15:17.074 "buf_count": 2048 00:15:17.074 } 00:15:17.074 } 00:15:17.074 ] 00:15:17.074 }, 00:15:17.074 { 00:15:17.074 "subsystem": "bdev", 00:15:17.074 "config": [ 00:15:17.074 { 00:15:17.074 "method": "bdev_set_options", 00:15:17.074 "params": { 00:15:17.074 "bdev_io_pool_size": 65535, 00:15:17.074 "bdev_io_cache_size": 256, 00:15:17.074 "bdev_auto_examine": true, 00:15:17.074 "iobuf_small_cache_size": 128, 00:15:17.074 "iobuf_large_cache_size": 16 00:15:17.074 } 00:15:17.074 }, 00:15:17.074 { 00:15:17.074 "method": "bdev_raid_set_options", 00:15:17.074 "params": { 00:15:17.074 "process_window_size_kb": 1024, 00:15:17.074 "process_max_bandwidth_mb_sec": 0 00:15:17.074 } 00:15:17.074 }, 00:15:17.074 { 00:15:17.074 "method": "bdev_iscsi_set_options", 00:15:17.074 "params": { 00:15:17.074 "timeout_sec": 30 00:15:17.074 } 00:15:17.074 }, 00:15:17.074 { 00:15:17.074 "method": "bdev_nvme_set_options", 00:15:17.074 "params": { 00:15:17.074 "action_on_timeout": "none", 00:15:17.074 "timeout_us": 0, 00:15:17.074 "timeout_admin_us": 0, 00:15:17.074 "keep_alive_timeout_ms": 10000, 00:15:17.074 
"arbitration_burst": 0, 00:15:17.074 "low_priority_weight": 0, 00:15:17.074 "medium_priority_weight": 0, 00:15:17.074 "high_priority_weight": 0, 00:15:17.074 "nvme_adminq_poll_period_us": 10000, 00:15:17.074 "nvme_ioq_poll_period_us": 0, 00:15:17.074 "io_queue_requests": 0, 00:15:17.074 "delay_cmd_submit": true, 00:15:17.074 "transport_retry_count": 4, 00:15:17.074 "bdev_retry_count": 3, 00:15:17.074 "transport_ack_timeout": 0, 00:15:17.074 "ctrlr_loss_timeout_sec": 0, 00:15:17.074 "reconnect_delay_sec": 0, 00:15:17.074 "fast_io_fail_timeout_sec": 0, 00:15:17.074 "disable_auto_failback": false, 00:15:17.074 "generate_uuids": false, 00:15:17.074 "transport_tos": 0, 00:15:17.074 "nvme_error_stat": false, 00:15:17.074 "rdma_srq_size": 0, 00:15:17.074 "io_path_stat": false, 00:15:17.074 "allow_accel_sequence": false, 00:15:17.074 "rdma_max_cq_size": 0, 00:15:17.074 "rdma_cm_event_timeout_ms": 0, 00:15:17.074 "dhchap_digests": [ 00:15:17.074 "sha256", 00:15:17.074 "sha384", 00:15:17.074 "sha512" 00:15:17.074 ], 00:15:17.074 "dhchap_dhgroups": [ 00:15:17.074 "null", 00:15:17.074 "ffdhe2048", 00:15:17.074 "ffdhe3072", 00:15:17.074 "ffdhe4096", 00:15:17.074 "ffdhe6144", 00:15:17.074 "ffdhe8192" 00:15:17.074 ] 00:15:17.074 } 00:15:17.074 }, 00:15:17.074 { 00:15:17.074 "method": "bdev_nvme_set_hotplug", 00:15:17.074 "params": { 00:15:17.074 "period_us": 100000, 00:15:17.074 "enable": false 00:15:17.074 } 00:15:17.074 }, 00:15:17.074 { 00:15:17.074 "method": "bdev_wait_for_examine" 00:15:17.074 } 00:15:17.074 ] 00:15:17.074 }, 00:15:17.074 { 00:15:17.074 "subsystem": "scsi", 00:15:17.074 "config": null 00:15:17.074 }, 00:15:17.074 { 00:15:17.074 "subsystem": "scheduler", 00:15:17.074 "config": [ 00:15:17.074 { 00:15:17.074 "method": "framework_set_scheduler", 00:15:17.074 "params": { 00:15:17.074 "name": "static" 00:15:17.074 } 00:15:17.074 } 00:15:17.074 ] 00:15:17.074 }, 00:15:17.074 { 00:15:17.074 "subsystem": "vhost_scsi", 00:15:17.074 "config": [] 00:15:17.074 }, 
00:15:17.074 { 00:15:17.074 "subsystem": "vhost_blk", 00:15:17.074 "config": [] 00:15:17.074 }, 00:15:17.074 { 00:15:17.074 "subsystem": "ublk", 00:15:17.074 "config": [] 00:15:17.074 }, 00:15:17.074 { 00:15:17.074 "subsystem": "nbd", 00:15:17.074 "config": [] 00:15:17.074 }, 00:15:17.074 { 00:15:17.074 "subsystem": "nvmf", 00:15:17.074 "config": [ 00:15:17.074 { 00:15:17.074 "method": "nvmf_set_config", 00:15:17.074 "params": { 00:15:17.074 "discovery_filter": "match_any", 00:15:17.074 "admin_cmd_passthru": { 00:15:17.074 "identify_ctrlr": false 00:15:17.074 } 00:15:17.074 } 00:15:17.074 }, 00:15:17.074 { 00:15:17.074 "method": "nvmf_set_max_subsystems", 00:15:17.074 "params": { 00:15:17.074 "max_subsystems": 1024 00:15:17.074 } 00:15:17.074 }, 00:15:17.074 { 00:15:17.074 "method": "nvmf_set_crdt", 00:15:17.074 "params": { 00:15:17.074 "crdt1": 0, 00:15:17.074 "crdt2": 0, 00:15:17.074 "crdt3": 0 00:15:17.074 } 00:15:17.074 }, 00:15:17.074 { 00:15:17.074 "method": "nvmf_create_transport", 00:15:17.074 "params": { 00:15:17.074 "trtype": "TCP", 00:15:17.074 "max_queue_depth": 128, 00:15:17.074 "max_io_qpairs_per_ctrlr": 127, 00:15:17.074 "in_capsule_data_size": 4096, 00:15:17.074 "max_io_size": 131072, 00:15:17.074 "io_unit_size": 131072, 00:15:17.074 "max_aq_depth": 128, 00:15:17.075 "num_shared_buffers": 511, 00:15:17.075 "buf_cache_size": 4294967295, 00:15:17.075 "dif_insert_or_strip": false, 00:15:17.075 "zcopy": false, 00:15:17.075 "c2h_success": true, 00:15:17.075 "sock_priority": 0, 00:15:17.075 "abort_timeout_sec": 1, 00:15:17.075 "ack_timeout": 0, 00:15:17.075 "data_wr_pool_size": 0 00:15:17.075 } 00:15:17.075 } 00:15:17.075 ] 00:15:17.075 }, 00:15:17.075 { 00:15:17.075 "subsystem": "iscsi", 00:15:17.075 "config": [ 00:15:17.075 { 00:15:17.075 "method": "iscsi_set_options", 00:15:17.075 "params": { 00:15:17.075 "node_base": "iqn.2016-06.io.spdk", 00:15:17.075 "max_sessions": 128, 00:15:17.075 "max_connections_per_session": 2, 00:15:17.075 "max_queue_depth": 
64, 00:15:17.075 "default_time2wait": 2, 00:15:17.075 "default_time2retain": 20, 00:15:17.075 "first_burst_length": 8192, 00:15:17.075 "immediate_data": true, 00:15:17.075 "allow_duplicated_isid": false, 00:15:17.075 "error_recovery_level": 0, 00:15:17.075 "nop_timeout": 60, 00:15:17.075 "nop_in_interval": 30, 00:15:17.075 "disable_chap": false, 00:15:17.075 "require_chap": false, 00:15:17.075 "mutual_chap": false, 00:15:17.075 "chap_group": 0, 00:15:17.075 "max_large_datain_per_connection": 64, 00:15:17.075 "max_r2t_per_connection": 4, 00:15:17.075 "pdu_pool_size": 36864, 00:15:17.075 "immediate_data_pool_size": 16384, 00:15:17.075 "data_out_pool_size": 2048 00:15:17.075 } 00:15:17.075 } 00:15:17.075 ] 00:15:17.075 } 00:15:17.075 ] 00:15:17.075 } 00:15:17.075 16:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:17.075 16:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59385 00:15:17.075 16:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59385 ']' 00:15:17.075 16:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59385 00:15:17.075 16:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:15:17.075 16:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:17.075 16:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59385 00:15:17.075 16:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:17.075 killing process with pid 59385 00:15:17.075 16:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:17.075 16:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59385' 00:15:17.075 16:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59385 00:15:17.075 
16:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59385 00:15:19.602 16:53:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59441 00:15:19.602 16:53:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:19.602 16:53:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:15:24.867 16:53:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59441 00:15:24.867 16:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59441 ']' 00:15:24.867 16:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59441 00:15:24.867 16:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:15:24.867 16:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:24.867 16:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59441 00:15:24.867 killing process with pid 59441 00:15:24.867 16:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:24.867 16:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:24.867 16:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59441' 00:15:24.867 16:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59441 00:15:24.867 16:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59441 00:15:27.398 16:53:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:27.398 16:53:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:27.398 00:15:27.398 real 
0m11.425s 00:15:27.398 user 0m10.812s 00:15:27.398 sys 0m0.995s 00:15:27.398 16:53:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:27.398 16:53:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:27.398 ************************************ 00:15:27.398 END TEST skip_rpc_with_json 00:15:27.398 ************************************ 00:15:27.398 16:53:28 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:15:27.398 16:53:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:15:27.398 16:53:28 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:27.398 16:53:28 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.398 16:53:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.398 ************************************ 00:15:27.398 START TEST skip_rpc_with_delay 00:15:27.398 ************************************ 00:15:27.398 16:53:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:15:27.398 16:53:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:27.398 16:53:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:15:27.398 16:53:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:27.398 16:53:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:27.398 16:53:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.398 16:53:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:27.398 16:53:28 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.398 16:53:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:27.398 16:53:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.398 16:53:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:27.399 16:53:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:15:27.399 16:53:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:27.399 [2024-07-22 16:53:28.720183] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:15:27.399 [2024-07-22 16:53:28.720479] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:15:27.399 16:53:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:15:27.399 16:53:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:27.399 16:53:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:27.399 16:53:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:27.399 00:15:27.399 real 0m0.238s 00:15:27.399 user 0m0.141s 00:15:27.399 sys 0m0.093s 00:15:27.399 16:53:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:27.399 16:53:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:15:27.399 ************************************ 00:15:27.399 END TEST skip_rpc_with_delay 00:15:27.399 ************************************ 00:15:27.399 16:53:28 skip_rpc -- common/autotest_common.sh@1142 -- 
# return 0 00:15:27.399 16:53:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:15:27.399 16:53:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:15:27.399 16:53:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:15:27.399 16:53:28 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:27.399 16:53:28 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.399 16:53:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.399 ************************************ 00:15:27.399 START TEST exit_on_failed_rpc_init 00:15:27.399 ************************************ 00:15:27.399 16:53:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:15:27.399 16:53:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59580 00:15:27.399 16:53:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59580 00:15:27.399 16:53:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:27.399 16:53:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59580 ']' 00:15:27.399 16:53:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.399 16:53:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:27.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.399 16:53:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:27.399 16:53:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:27.399 16:53:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:15:27.399 [2024-07-22 16:53:29.007996] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:27.399 [2024-07-22 16:53:29.008216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59580 ] 00:15:27.657 [2024-07-22 16:53:29.181483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.915 [2024-07-22 16:53:29.488784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.911 16:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:28.911 16:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:15:28.911 16:53:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:28.911 16:53:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:28.911 16:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:15:28.911 16:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:28.911 16:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:28.911 16:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:28.911 16:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:28.911 16:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:28.911 16:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:28.911 16:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:28.911 16:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:28.911 16:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:15:28.911 16:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:28.911 [2024-07-22 16:53:30.460633] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:28.911 [2024-07-22 16:53:30.460879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59598 ] 00:15:29.169 [2024-07-22 16:53:30.628447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.428 [2024-07-22 16:53:30.879325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.428 [2024-07-22 16:53:30.879457] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:15:29.428 [2024-07-22 16:53:30.879483] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:15:29.428 [2024-07-22 16:53:30.879498] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:29.995 16:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:15:29.995 16:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:29.995 16:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:15:29.995 16:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:15:29.995 16:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:15:29.995 16:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:29.995 16:53:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:29.995 16:53:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59580 00:15:29.995 16:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59580 ']' 00:15:29.995 16:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59580 00:15:29.995 16:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:15:29.995 16:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:29.995 16:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59580 00:15:29.995 16:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:29.995 16:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:29.995 killing process with pid 59580 00:15:29.995 16:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 59580' 00:15:29.995 16:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59580 00:15:29.995 16:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59580 00:15:32.523 00:15:32.523 real 0m4.842s 00:15:32.523 user 0m5.516s 00:15:32.523 sys 0m0.724s 00:15:32.523 16:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:32.523 16:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:15:32.523 ************************************ 00:15:32.523 END TEST exit_on_failed_rpc_init 00:15:32.523 ************************************ 00:15:32.523 16:53:33 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:15:32.523 16:53:33 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:32.523 00:15:32.523 real 0m24.184s 00:15:32.523 user 0m23.362s 00:15:32.523 sys 0m2.477s 00:15:32.523 16:53:33 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:32.523 16:53:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.523 ************************************ 00:15:32.523 END TEST skip_rpc 00:15:32.523 ************************************ 00:15:32.523 16:53:33 -- common/autotest_common.sh@1142 -- # return 0 00:15:32.523 16:53:33 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:32.523 16:53:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:32.523 16:53:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:32.523 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:15:32.523 ************************************ 00:15:32.523 START TEST rpc_client 00:15:32.523 ************************************ 00:15:32.523 16:53:33 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:32.523 * Looking for test storage... 
00:15:32.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:15:32.523 16:53:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:15:32.523 OK 00:15:32.523 16:53:33 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:15:32.523 ************************************ 00:15:32.523 END TEST rpc_client 00:15:32.523 ************************************ 00:15:32.523 00:15:32.523 real 0m0.153s 00:15:32.523 user 0m0.069s 00:15:32.523 sys 0m0.087s 00:15:32.523 16:53:33 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:32.523 16:53:33 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:15:32.523 16:53:33 -- common/autotest_common.sh@1142 -- # return 0 00:15:32.523 16:53:33 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:32.523 16:53:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:32.523 16:53:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:32.523 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:15:32.523 ************************************ 00:15:32.523 START TEST json_config 00:15:32.523 ************************************ 00:15:32.523 16:53:33 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:32.523 16:53:34 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:32.523 16:53:34 json_config -- nvmf/common.sh@7 -- # uname -s 00:15:32.523 16:53:34 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.523 16:53:34 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ed8bf231-bc82-4919-8d10-e9b4f641cbc5 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=ed8bf231-bc82-4919-8d10-e9b4f641cbc5 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:32.524 16:53:34 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.524 16:53:34 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.524 16:53:34 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.524 16:53:34 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.524 16:53:34 json_config -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.524 16:53:34 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.524 16:53:34 json_config -- paths/export.sh@5 -- # export PATH 00:15:32.524 16:53:34 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@47 -- # : 0 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:15:32.524 16:53:34 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:32.524 16:53:34 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:32.524 16:53:34 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:32.524 16:53:34 json_config -- json_config/json_config.sh@11 -- # [[ 1 -eq 1 ]] 00:15:32.524 16:53:34 json_config -- json_config/json_config.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:15:32.524 16:53:34 json_config -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:15:32.524 16:53:34 json_config -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:15:32.524 16:53:34 json_config -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:15:32.524 16:53:34 json_config -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:15:32.524 16:53:34 json_config -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:15:32.524 16:53:34 json_config -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:15:32.524 16:53:34 json_config -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:15:32.524 16:53:34 json_config -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:15:32.524 16:53:34 json_config -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:15:32.524 16:53:34 json_config -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:15:32.524 16:53:34 json_config -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:15:32.524 16:53:34 json_config -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:15:32.524 16:53:34 json_config -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:15:32.524 16:53:34 json_config -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:15:32.524 16:53:34 json_config -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:15:32.524 16:53:34 json_config -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:15:32.524 16:53:34 json_config -- 
iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:15:32.524 16:53:34 json_config -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:15:32.524 16:53:34 json_config -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:15:32.524 16:53:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:15:32.524 16:53:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:15:32.524 16:53:34 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:15:32.524 16:53:34 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:15:32.524 16:53:34 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:15:32.524 16:53:34 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:15:32.524 16:53:34 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:15:32.524 16:53:34 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:15:32.524 16:53:34 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:15:32.524 16:53:34 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:15:32.524 16:53:34 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:15:32.524 16:53:34 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:15:32.524 16:53:34 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:15:32.524 16:53:34 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:15:32.524 INFO: JSON 
configuration test init 00:15:32.524 16:53:34 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:15:32.524 16:53:34 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:15:32.524 16:53:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:32.524 16:53:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:32.524 16:53:34 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:15:32.524 16:53:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:32.524 16:53:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:32.524 Waiting for target to run... 00:15:32.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:32.524 16:53:34 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:15:32.524 16:53:34 json_config -- json_config/common.sh@9 -- # local app=target 00:15:32.524 16:53:34 json_config -- json_config/common.sh@10 -- # shift 00:15:32.524 16:53:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:32.524 16:53:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:32.524 16:53:34 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:15:32.524 16:53:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:32.524 16:53:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:32.524 16:53:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59752 00:15:32.524 16:53:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:15:32.524 16:53:34 json_config -- json_config/common.sh@25 -- # waitforlisten 59752 /var/tmp/spdk_tgt.sock 00:15:32.524 16:53:34 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:15:32.524 16:53:34 json_config -- common/autotest_common.sh@829 -- # '[' -z 59752 ']' 00:15:32.524 16:53:34 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:32.524 16:53:34 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:32.524 16:53:34 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:32.524 16:53:34 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:32.524 16:53:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:32.783 [2024-07-22 16:53:34.240537] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:15:32.783 [2024-07-22 16:53:34.240765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59752 ] 00:15:33.348 [2024-07-22 16:53:34.726474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.606 [2024-07-22 16:53:34.988464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.606 00:15:33.606 16:53:35 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:33.606 16:53:35 json_config -- common/autotest_common.sh@862 -- # return 0 00:15:33.606 16:53:35 json_config -- json_config/common.sh@26 -- # echo '' 00:15:33.606 16:53:35 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:15:33.606 16:53:35 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:15:33.606 16:53:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:33.606 16:53:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:33.606 16:53:35 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:15:33.606 16:53:35 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:15:33.606 16:53:35 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:33.606 16:53:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:33.606 16:53:35 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:15:33.606 16:53:35 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:15:33.606 16:53:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:15:34.539 16:53:36 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 
00:15:34.539 16:53:36 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:15:34.539 16:53:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:34.539 16:53:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:34.539 16:53:36 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:15:34.539 16:53:36 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:15:34.539 16:53:36 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:15:34.539 16:53:36 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:15:34.539 16:53:36 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:15:34.539 16:53:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:15:35.105 16:53:36 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:15:35.105 16:53:36 json_config -- json_config/json_config.sh@48 -- # local get_types 00:15:35.105 16:53:36 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:15:35.105 16:53:36 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:15:35.106 16:53:36 json_config -- json_config/json_config.sh@51 -- # sort 00:15:35.106 16:53:36 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:15:35.106 16:53:36 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:15:35.106 16:53:36 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:15:35.106 16:53:36 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:15:35.106 16:53:36 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:15:35.106 16:53:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:35.106 16:53:36 json_config -- 
common/autotest_common.sh@10 -- # set +x 00:15:35.106 16:53:36 json_config -- json_config/json_config.sh@59 -- # return 0 00:15:35.106 16:53:36 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:15:35.106 16:53:36 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:15:35.106 16:53:36 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:15:35.106 16:53:36 json_config -- json_config/json_config.sh@291 -- # create_iscsi_subsystem_config 00:15:35.106 16:53:36 json_config -- json_config/json_config.sh@225 -- # timing_enter create_iscsi_subsystem_config 00:15:35.106 16:53:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:35.106 16:53:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:35.106 16:53:36 json_config -- json_config/json_config.sh@226 -- # tgt_rpc bdev_malloc_create 64 1024 --name MallocForIscsi0 00:15:35.106 16:53:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 64 1024 --name MallocForIscsi0 00:15:35.364 MallocForIscsi0 00:15:35.364 16:53:36 json_config -- json_config/json_config.sh@227 -- # tgt_rpc iscsi_create_portal_group 1 127.0.0.1:3260 00:15:35.364 16:53:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_portal_group 1 127.0.0.1:3260 00:15:35.622 16:53:37 json_config -- json_config/json_config.sh@228 -- # tgt_rpc iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:15:35.622 16:53:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:15:35.879 16:53:37 json_config -- json_config/json_config.sh@229 -- # tgt_rpc iscsi_create_target_node Target3 Target3_alias MallocForIscsi0:0 1:2 64 -d 00:15:35.879 16:53:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock iscsi_create_target_node Target3 Target3_alias MallocForIscsi0:0 1:2 64 -d 00:15:36.137 16:53:37 json_config -- json_config/json_config.sh@230 -- # timing_exit create_iscsi_subsystem_config 00:15:36.137 16:53:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:36.137 16:53:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:36.137 16:53:37 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:15:36.137 16:53:37 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:15:36.137 16:53:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:36.137 16:53:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:36.137 16:53:37 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:15:36.137 16:53:37 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:15:36.137 16:53:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:15:36.395 MallocBdevForConfigChangeCheck 00:15:36.395 16:53:37 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:15:36.395 16:53:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:36.395 16:53:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:36.395 16:53:37 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:15:36.395 16:53:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:37.006 INFO: shutting down applications... 00:15:37.006 16:53:38 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
00:15:37.006 16:53:38 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:15:37.006 16:53:38 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:15:37.006 16:53:38 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:15:37.006 16:53:38 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:15:37.264 Calling clear_iscsi_subsystem 00:15:37.264 Calling clear_nvmf_subsystem 00:15:37.264 Calling clear_nbd_subsystem 00:15:37.264 Calling clear_ublk_subsystem 00:15:37.264 Calling clear_vhost_blk_subsystem 00:15:37.264 Calling clear_vhost_scsi_subsystem 00:15:37.264 Calling clear_bdev_subsystem 00:15:37.264 16:53:38 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:15:37.264 16:53:38 json_config -- json_config/json_config.sh@347 -- # count=100 00:15:37.264 16:53:38 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:15:37.264 16:53:38 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:15:37.264 16:53:38 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:37.264 16:53:38 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:15:37.523 16:53:39 json_config -- json_config/json_config.sh@349 -- # break 00:15:37.523 16:53:39 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:15:37.523 16:53:39 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:15:37.523 16:53:39 json_config -- json_config/common.sh@31 -- # local app=target 00:15:37.523 16:53:39 json_config -- json_config/common.sh@34 -- # [[ -n 22 
]] 00:15:37.523 16:53:39 json_config -- json_config/common.sh@35 -- # [[ -n 59752 ]] 00:15:37.523 16:53:39 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59752 00:15:37.523 16:53:39 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:15:37.523 16:53:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:37.523 16:53:39 json_config -- json_config/common.sh@41 -- # kill -0 59752 00:15:37.523 16:53:39 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:15:38.089 16:53:39 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:15:38.089 16:53:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:38.089 16:53:39 json_config -- json_config/common.sh@41 -- # kill -0 59752 00:15:38.089 16:53:39 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:15:38.655 16:53:40 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:15:38.655 16:53:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:38.655 16:53:40 json_config -- json_config/common.sh@41 -- # kill -0 59752 00:15:38.655 SPDK target shutdown done 00:15:38.655 INFO: relaunching applications... 00:15:38.655 16:53:40 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:15:38.655 16:53:40 json_config -- json_config/common.sh@43 -- # break 00:15:38.655 16:53:40 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:15:38.655 16:53:40 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:15:38.655 16:53:40 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
00:15:38.655 16:53:40 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:38.655 16:53:40 json_config -- json_config/common.sh@9 -- # local app=target 00:15:38.655 16:53:40 json_config -- json_config/common.sh@10 -- # shift 00:15:38.655 16:53:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:38.655 16:53:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:38.655 16:53:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:15:38.655 16:53:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:38.655 16:53:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:38.655 16:53:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59947 00:15:38.655 16:53:40 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:38.655 16:53:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:15:38.655 Waiting for target to run... 00:15:38.655 16:53:40 json_config -- json_config/common.sh@25 -- # waitforlisten 59947 /var/tmp/spdk_tgt.sock 00:15:38.655 16:53:40 json_config -- common/autotest_common.sh@829 -- # '[' -z 59947 ']' 00:15:38.655 16:53:40 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:38.655 16:53:40 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:38.655 16:53:40 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:38.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:15:38.655 16:53:40 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:38.655 16:53:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:38.655 [2024-07-22 16:53:40.254870] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:38.656 [2024-07-22 16:53:40.255331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59947 ] 00:15:39.221 [2024-07-22 16:53:40.718191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.480 [2024-07-22 16:53:40.949797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.415 00:15:40.415 INFO: Checking if target configuration is the same... 00:15:40.415 16:53:41 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:40.415 16:53:41 json_config -- common/autotest_common.sh@862 -- # return 0 00:15:40.415 16:53:41 json_config -- json_config/common.sh@26 -- # echo '' 00:15:40.415 16:53:41 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:15:40.415 16:53:41 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:15:40.415 16:53:41 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:40.415 16:53:41 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:15:40.415 16:53:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:40.415 + '[' 2 -ne 2 ']' 00:15:40.415 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:15:40.415 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:15:40.415 + rootdir=/home/vagrant/spdk_repo/spdk 00:15:40.415 +++ basename /dev/fd/62 00:15:40.415 ++ mktemp /tmp/62.XXX 00:15:40.415 + tmp_file_1=/tmp/62.AfV 00:15:40.415 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:40.415 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:15:40.415 + tmp_file_2=/tmp/spdk_tgt_config.json.vUo 00:15:40.415 + ret=0 00:15:40.415 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:40.981 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:40.981 + diff -u /tmp/62.AfV /tmp/spdk_tgt_config.json.vUo 00:15:40.981 INFO: JSON config files are the same 00:15:40.981 + echo 'INFO: JSON config files are the same' 00:15:40.981 + rm /tmp/62.AfV /tmp/spdk_tgt_config.json.vUo 00:15:40.981 + exit 0 00:15:40.981 INFO: changing configuration and checking if this can be detected... 00:15:40.981 16:53:42 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:15:40.981 16:53:42 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
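The json_diff.sh flow traced above dumps the live configuration with `save_config`, normalizes both JSON documents with `config_filter.py -method sort`, and compares the results with `diff -u`, so that key ordering cannot cause a spurious mismatch. A minimal sketch of that idea, with a `python3` key-sorting one-liner standing in for `config_filter.py` (the function name and temp-file pattern are illustrative, not SPDK's actual script):

```shell
#!/usr/bin/env bash
# Sketch: compare two JSON config files after normalizing key order,
# mirroring the mktemp + sort + diff -u flow in the trace.
json_configs_match() {
    local a=$1 b=$2 t1 t2 ret=0
    t1=$(mktemp /tmp/62.XXX)
    t2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    # Stand-in for config_filter.py -method sort: re-emit with sorted keys
    python3 -c 'import json,sys; json.dump(json.load(open(sys.argv[1])), sys.stdout, sort_keys=True)' "$a" > "$t1"
    python3 -c 'import json,sys; json.dump(json.load(open(sys.argv[1])), sys.stdout, sort_keys=True)' "$b" > "$t2"
    diff -u "$t1" "$t2" > /dev/null || ret=1
    rm -f "$t1" "$t2"
    return $ret
}
```

With identical content in different key order the function returns 0, which corresponds to the `INFO: JSON config files are the same` / `exit 0` branch above; any real difference takes the `ret=1` branch that dumps both files, as seen after the `bdev_malloc_delete` change.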
00:15:40.981 16:53:42 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:15:40.981 16:53:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:15:41.239 16:53:42 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:41.239 16:53:42 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:15:41.239 16:53:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:41.239 + '[' 2 -ne 2 ']' 00:15:41.239 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:15:41.239 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:15:41.239 + rootdir=/home/vagrant/spdk_repo/spdk 00:15:41.239 +++ basename /dev/fd/62 00:15:41.239 ++ mktemp /tmp/62.XXX 00:15:41.239 + tmp_file_1=/tmp/62.ZWp 00:15:41.239 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:41.239 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:15:41.239 + tmp_file_2=/tmp/spdk_tgt_config.json.0B8 00:15:41.239 + ret=0 00:15:41.239 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:41.498 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:41.498 + diff -u /tmp/62.ZWp /tmp/spdk_tgt_config.json.0B8 00:15:41.498 + ret=1 00:15:41.498 + echo '=== Start of file: /tmp/62.ZWp ===' 00:15:41.498 + cat /tmp/62.ZWp 00:15:41.498 + echo '=== End of file: /tmp/62.ZWp ===' 00:15:41.498 + echo '' 00:15:41.498 + echo '=== Start of file: /tmp/spdk_tgt_config.json.0B8 ===' 00:15:41.498 + cat /tmp/spdk_tgt_config.json.0B8 00:15:41.498 + echo '=== End of file: /tmp/spdk_tgt_config.json.0B8 ===' 00:15:41.498 + echo '' 00:15:41.498 + rm /tmp/62.ZWp 
/tmp/spdk_tgt_config.json.0B8 00:15:41.498 + exit 1 00:15:41.498 INFO: configuration change detected. 00:15:41.498 16:53:43 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:15:41.498 16:53:43 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:15:41.498 16:53:43 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:15:41.498 16:53:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:41.498 16:53:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:41.754 16:53:43 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:15:41.754 16:53:43 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:15:41.754 16:53:43 json_config -- json_config/json_config.sh@321 -- # [[ -n 59947 ]] 00:15:41.754 16:53:43 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:15:41.754 16:53:43 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:15:41.755 16:53:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:41.755 16:53:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:41.755 16:53:43 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:15:41.755 16:53:43 json_config -- json_config/json_config.sh@197 -- # uname -s 00:15:41.755 16:53:43 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:15:41.755 16:53:43 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:15:41.755 16:53:43 json_config -- json_config/json_config.sh@201 -- # [[ 1 -eq 1 ]] 00:15:41.755 16:53:43 json_config -- json_config/json_config.sh@202 -- # rbd_cleanup 00:15:41.755 16:53:43 json_config -- common/autotest_common.sh@1031 -- # hash ceph 00:15:41.755 16:53:43 json_config -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:15:41.755 + base_dir=/var/tmp/ceph 
00:15:41.755 + image=/var/tmp/ceph/ceph_raw.img 00:15:41.755 + dev=/dev/loop200 00:15:41.755 + pkill -9 ceph 00:15:41.755 + sleep 3 00:15:45.033 + umount /dev/loop200p2 00:15:45.033 umount: /dev/loop200p2: no mount point specified. 00:15:45.033 + losetup -d /dev/loop200 00:15:45.033 losetup: /dev/loop200: failed to use device: No such device 00:15:45.033 + rm -rf /var/tmp/ceph 00:15:45.033 16:53:46 json_config -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:15:45.033 16:53:46 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:15:45.033 16:53:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:45.033 16:53:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:45.033 16:53:46 json_config -- json_config/json_config.sh@327 -- # killprocess 59947 00:15:45.033 16:53:46 json_config -- common/autotest_common.sh@948 -- # '[' -z 59947 ']' 00:15:45.033 16:53:46 json_config -- common/autotest_common.sh@952 -- # kill -0 59947 00:15:45.033 16:53:46 json_config -- common/autotest_common.sh@953 -- # uname 00:15:45.033 16:53:46 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:45.033 16:53:46 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59947 00:15:45.033 killing process with pid 59947 00:15:45.033 16:53:46 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:45.033 16:53:46 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:45.033 16:53:46 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59947' 00:15:45.033 16:53:46 json_config -- common/autotest_common.sh@967 -- # kill 59947 00:15:45.033 16:53:46 json_config -- common/autotest_common.sh@972 -- # wait 59947 00:15:45.967 16:53:47 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 
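The `killprocess 59947` sequence above first re-checks the pid with `kill -0`, looks up the command name with `ps --no-headers -o comm=` (the trace compares it against `sudo` and the `reactor_0` name), logs, signals, and finally `wait`s so the exit status is collected. A minimal sketch under those assumptions; this is not the real autotest_common.sh, and the sudo special-casing is reduced to a comment:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess helper pattern seen in the trace.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1       # nothing to kill
    # ps -o comm= prints only the command name; the trace branches on it
    # (e.g. refusing the plain-kill path when the process is sudo)
    local process_name
    process_name=$(ps -o comm= -p "$pid")
    echo "killing process with pid $pid"
    kill "$pid"
    # wait only succeeds for children of this shell; tolerate other pids
    wait "$pid" 2>/dev/null || true
}
```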
00:15:45.967 16:53:47 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:15:45.967 16:53:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:45.967 16:53:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:45.967 INFO: Success 00:15:45.967 16:53:47 json_config -- json_config/json_config.sh@332 -- # return 0 00:15:45.967 16:53:47 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:15:45.967 ************************************ 00:15:45.967 END TEST json_config 00:15:45.967 ************************************ 00:15:45.967 00:15:45.967 real 0m13.350s 00:15:45.967 user 0m16.019s 00:15:45.967 sys 0m2.092s 00:15:45.967 16:53:47 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:45.967 16:53:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:45.967 16:53:47 -- common/autotest_common.sh@1142 -- # return 0 00:15:45.967 16:53:47 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:45.967 16:53:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:45.967 16:53:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:45.967 16:53:47 -- common/autotest_common.sh@10 -- # set +x 00:15:45.967 ************************************ 00:15:45.967 START TEST json_config_extra_key 00:15:45.967 ************************************ 00:15:45.967 16:53:47 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:45.967 16:53:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:45.967 16:53:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:15:45.967 16:53:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.967 16:53:47 json_config_extra_key -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:45.967 16:53:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.967 16:53:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.967 16:53:47 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.967 16:53:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.967 16:53:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.968 16:53:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.968 16:53:47 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.968 16:53:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.968 16:53:47 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ed8bf231-bc82-4919-8d10-e9b4f641cbc5 00:15:45.968 16:53:47 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=ed8bf231-bc82-4919-8d10-e9b4f641cbc5 00:15:45.968 16:53:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.968 16:53:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.968 16:53:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:45.968 16:53:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.968 16:53:47 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:45.968 16:53:47 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.968 16:53:47 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.968 16:53:47 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.968 16:53:47 json_config_extra_key -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.968 16:53:47 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.968 16:53:47 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.968 16:53:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:15:45.968 16:53:47 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.968 16:53:47 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:15:45.968 16:53:47 
json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:45.968 16:53:47 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:45.968 16:53:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.968 16:53:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.968 16:53:47 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.968 16:53:47 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:45.968 16:53:47 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:45.968 16:53:47 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:45.968 16:53:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:45.968 16:53:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:15:45.968 16:53:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:15:45.968 16:53:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:15:45.968 16:53:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:15:45.968 16:53:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:15:45.968 16:53:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:15:45.968 16:53:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:15:45.968 16:53:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:15:45.968 16:53:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 
'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:15:45.968 INFO: launching applications... 00:15:45.968 16:53:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:15:45.968 16:53:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:45.968 16:53:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:15:45.968 16:53:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:15:45.968 16:53:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:45.968 16:53:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:45.968 16:53:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:15:45.968 16:53:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:45.968 16:53:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:45.968 Waiting for target to run... 00:15:45.968 16:53:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=60149 00:15:45.968 16:53:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:15:45.968 16:53:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 60149 /var/tmp/spdk_tgt.sock 00:15:45.968 16:53:47 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 60149 ']' 00:15:45.968 16:53:47 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:45.968 16:53:47 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:45.968 16:53:47 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:45.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:45.968 16:53:47 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:45.968 16:53:47 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:45.968 16:53:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:15:46.226 [2024-07-22 16:53:47.648217] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:46.226 [2024-07-22 16:53:47.648451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60149 ] 00:15:46.793 [2024-07-22 16:53:48.120491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.793 [2024-07-22 16:53:48.397037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.727 00:15:47.727 INFO: shutting down applications... 
00:15:47.727 16:53:49 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.727 16:53:49 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:15:47.727 16:53:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:15:47.727 16:53:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:15:47.727 16:53:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:15:47.727 16:53:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:15:47.727 16:53:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:15:47.727 16:53:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 60149 ]] 00:15:47.727 16:53:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 60149 00:15:47.727 16:53:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:15:47.727 16:53:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:47.727 16:53:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60149 00:15:47.727 16:53:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:47.985 16:53:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:47.985 16:53:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:47.985 16:53:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60149 00:15:47.985 16:53:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:48.551 16:53:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:48.551 16:53:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:48.551 16:53:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60149 00:15:48.551 16:53:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:49.118 16:53:50 
json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:49.118 16:53:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:49.118 16:53:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60149 00:15:49.118 16:53:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:49.684 16:53:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:49.684 16:53:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:49.684 16:53:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60149 00:15:49.684 16:53:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:50.293 16:53:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:50.293 16:53:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:50.293 16:53:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60149 00:15:50.293 16:53:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:50.551 16:53:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:50.551 16:53:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:50.551 16:53:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60149 00:15:50.551 16:53:52 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:15:50.551 16:53:52 json_config_extra_key -- json_config/common.sh@43 -- # break 00:15:50.551 SPDK target shutdown done 00:15:50.551 16:53:52 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:15:50.551 16:53:52 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:15:50.551 Success 00:15:50.551 16:53:52 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:15:50.551 00:15:50.551 real 0m4.718s 00:15:50.551 user 0m4.220s 00:15:50.551 sys 0m0.644s 00:15:50.551 16:53:52 json_config_extra_key -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:15:50.551 ************************************ 00:15:50.551 END TEST json_config_extra_key 00:15:50.551 ************************************ 00:15:50.551 16:53:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:15:50.551 16:53:52 -- common/autotest_common.sh@1142 -- # return 0 00:15:50.552 16:53:52 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:15:50.552 16:53:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:50.552 16:53:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.552 16:53:52 -- common/autotest_common.sh@10 -- # set +x 00:15:50.552 ************************************ 00:15:50.552 START TEST alias_rpc 00:15:50.552 ************************************ 00:15:50.552 16:53:52 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:15:50.810 * Looking for test storage... 00:15:50.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:15:50.810 16:53:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:15:50.810 16:53:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=60254 00:15:50.810 16:53:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 60254 00:15:50.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.810 16:53:52 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 60254 ']' 00:15:50.810 16:53:52 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.810 16:53:52 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.810 16:53:52 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
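The `waitforlisten 60254 /var/tmp/spdk.sock` pattern above starts the target in the background and then blocks until it is listening on its UNIX-domain RPC socket, with `max_retries=100` bounding the wait. A minimal sketch of that idea; checking for the socket file with `-S` is a simplification of the real helper, which also probes the socket with an RPC before declaring the target ready:

```shell
#!/usr/bin/env bash
# Sketch: wait for a background process to bring up its UNIX socket,
# failing fast if the process dies first.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock}
    local max_retries=100
    while ((max_retries-- > 0)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
        [[ -S $rpc_addr ]] && return 0           # socket exists: target is up
        sleep 0.1
    done
    return 1   # never came up within ~10 s
}
```

This is why the trace interleaves `Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...` with the DPDK EAL initialization lines: the test script is polling while the target boots.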
00:15:50.810 16:53:52 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.810 16:53:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.810 16:53:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:50.810 [2024-07-22 16:53:52.388644] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:50.810 [2024-07-22 16:53:52.388850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60254 ] 00:15:51.069 [2024-07-22 16:53:52.566019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.327 [2024-07-22 16:53:52.877473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.274 16:53:53 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:52.274 16:53:53 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:15:52.274 16:53:53 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:15:52.532 16:53:54 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 60254 00:15:52.532 16:53:54 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 60254 ']' 00:15:52.532 16:53:54 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 60254 00:15:52.532 16:53:54 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:15:52.532 16:53:54 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:52.791 16:53:54 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60254 00:15:52.791 killing process with pid 60254 00:15:52.791 16:53:54 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:52.791 16:53:54 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:52.791 16:53:54 alias_rpc -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 60254' 00:15:52.791 16:53:54 alias_rpc -- common/autotest_common.sh@967 -- # kill 60254 00:15:52.791 16:53:54 alias_rpc -- common/autotest_common.sh@972 -- # wait 60254 00:15:55.323 ************************************ 00:15:55.323 END TEST alias_rpc 00:15:55.323 ************************************ 00:15:55.323 00:15:55.323 real 0m4.384s 00:15:55.323 user 0m4.498s 00:15:55.323 sys 0m0.634s 00:15:55.323 16:53:56 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:55.323 16:53:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.323 16:53:56 -- common/autotest_common.sh@1142 -- # return 0 00:15:55.323 16:53:56 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:15:55.323 16:53:56 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:15:55.323 16:53:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:55.323 16:53:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:55.323 16:53:56 -- common/autotest_common.sh@10 -- # set +x 00:15:55.323 ************************************ 00:15:55.323 START TEST spdkcli_tcp 00:15:55.323 ************************************ 00:15:55.323 16:53:56 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:15:55.323 * Looking for test storage... 
00:15:55.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:15:55.323 16:53:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:15:55.323 16:53:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:15:55.323 16:53:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:15:55.323 16:53:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:15:55.323 16:53:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:15:55.323 16:53:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:55.323 16:53:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:15:55.323 16:53:56 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:55.323 16:53:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:55.323 16:53:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=60357 00:15:55.323 16:53:56 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 60357 00:15:55.323 16:53:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:15:55.323 16:53:56 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 60357 ']' 00:15:55.323 16:53:56 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.323 16:53:56 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.323 16:53:56 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:55.323 16:53:56 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.323 16:53:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:55.323 [2024-07-22 16:53:56.845746] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:55.323 [2024-07-22 16:53:56.846395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60357 ] 00:15:55.582 [2024-07-22 16:53:57.027098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:55.840 [2024-07-22 16:53:57.372220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.840 [2024-07-22 16:53:57.372238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.835 16:53:58 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.835 16:53:58 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:15:56.835 16:53:58 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=60385 00:15:56.835 16:53:58 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:15:56.835 16:53:58 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:15:57.093 [ 00:15:57.093 "bdev_malloc_delete", 00:15:57.093 "bdev_malloc_create", 00:15:57.093 "bdev_null_resize", 00:15:57.093 "bdev_null_delete", 00:15:57.093 "bdev_null_create", 00:15:57.093 "bdev_nvme_cuse_unregister", 00:15:57.093 "bdev_nvme_cuse_register", 00:15:57.093 "bdev_opal_new_user", 00:15:57.093 "bdev_opal_set_lock_state", 00:15:57.093 "bdev_opal_delete", 00:15:57.093 "bdev_opal_get_info", 00:15:57.093 "bdev_opal_create", 00:15:57.093 "bdev_nvme_opal_revert", 00:15:57.093 "bdev_nvme_opal_init", 00:15:57.093 "bdev_nvme_send_cmd", 00:15:57.093 
"bdev_nvme_get_path_iostat", 00:15:57.093 "bdev_nvme_get_mdns_discovery_info", 00:15:57.093 "bdev_nvme_stop_mdns_discovery", 00:15:57.093 "bdev_nvme_start_mdns_discovery", 00:15:57.093 "bdev_nvme_set_multipath_policy", 00:15:57.093 "bdev_nvme_set_preferred_path", 00:15:57.093 "bdev_nvme_get_io_paths", 00:15:57.093 "bdev_nvme_remove_error_injection", 00:15:57.093 "bdev_nvme_add_error_injection", 00:15:57.093 "bdev_nvme_get_discovery_info", 00:15:57.093 "bdev_nvme_stop_discovery", 00:15:57.093 "bdev_nvme_start_discovery", 00:15:57.093 "bdev_nvme_get_controller_health_info", 00:15:57.093 "bdev_nvme_disable_controller", 00:15:57.093 "bdev_nvme_enable_controller", 00:15:57.093 "bdev_nvme_reset_controller", 00:15:57.093 "bdev_nvme_get_transport_statistics", 00:15:57.093 "bdev_nvme_apply_firmware", 00:15:57.093 "bdev_nvme_detach_controller", 00:15:57.093 "bdev_nvme_get_controllers", 00:15:57.093 "bdev_nvme_attach_controller", 00:15:57.093 "bdev_nvme_set_hotplug", 00:15:57.093 "bdev_nvme_set_options", 00:15:57.093 "bdev_passthru_delete", 00:15:57.093 "bdev_passthru_create", 00:15:57.093 "bdev_lvol_set_parent_bdev", 00:15:57.093 "bdev_lvol_set_parent", 00:15:57.093 "bdev_lvol_check_shallow_copy", 00:15:57.093 "bdev_lvol_start_shallow_copy", 00:15:57.093 "bdev_lvol_grow_lvstore", 00:15:57.093 "bdev_lvol_get_lvols", 00:15:57.093 "bdev_lvol_get_lvstores", 00:15:57.093 "bdev_lvol_delete", 00:15:57.093 "bdev_lvol_set_read_only", 00:15:57.093 "bdev_lvol_resize", 00:15:57.093 "bdev_lvol_decouple_parent", 00:15:57.093 "bdev_lvol_inflate", 00:15:57.093 "bdev_lvol_rename", 00:15:57.093 "bdev_lvol_clone_bdev", 00:15:57.093 "bdev_lvol_clone", 00:15:57.093 "bdev_lvol_snapshot", 00:15:57.093 "bdev_lvol_create", 00:15:57.093 "bdev_lvol_delete_lvstore", 00:15:57.093 "bdev_lvol_rename_lvstore", 00:15:57.093 "bdev_lvol_create_lvstore", 00:15:57.093 "bdev_raid_set_options", 00:15:57.093 "bdev_raid_remove_base_bdev", 00:15:57.093 "bdev_raid_add_base_bdev", 00:15:57.093 "bdev_raid_delete", 
00:15:57.093 "bdev_raid_create", 00:15:57.093 "bdev_raid_get_bdevs", 00:15:57.093 "bdev_error_inject_error", 00:15:57.093 "bdev_error_delete", 00:15:57.093 "bdev_error_create", 00:15:57.093 "bdev_split_delete", 00:15:57.093 "bdev_split_create", 00:15:57.093 "bdev_delay_delete", 00:15:57.093 "bdev_delay_create", 00:15:57.093 "bdev_delay_update_latency", 00:15:57.093 "bdev_zone_block_delete", 00:15:57.093 "bdev_zone_block_create", 00:15:57.093 "blobfs_create", 00:15:57.093 "blobfs_detect", 00:15:57.093 "blobfs_set_cache_size", 00:15:57.093 "bdev_aio_delete", 00:15:57.093 "bdev_aio_rescan", 00:15:57.093 "bdev_aio_create", 00:15:57.093 "bdev_ftl_set_property", 00:15:57.093 "bdev_ftl_get_properties", 00:15:57.093 "bdev_ftl_get_stats", 00:15:57.093 "bdev_ftl_unmap", 00:15:57.093 "bdev_ftl_unload", 00:15:57.093 "bdev_ftl_delete", 00:15:57.093 "bdev_ftl_load", 00:15:57.093 "bdev_ftl_create", 00:15:57.093 "bdev_virtio_attach_controller", 00:15:57.093 "bdev_virtio_scsi_get_devices", 00:15:57.093 "bdev_virtio_detach_controller", 00:15:57.094 "bdev_virtio_blk_set_hotplug", 00:15:57.094 "bdev_iscsi_delete", 00:15:57.094 "bdev_iscsi_create", 00:15:57.094 "bdev_iscsi_set_options", 00:15:57.094 "bdev_rbd_get_clusters_info", 00:15:57.094 "bdev_rbd_unregister_cluster", 00:15:57.094 "bdev_rbd_register_cluster", 00:15:57.094 "bdev_rbd_resize", 00:15:57.094 "bdev_rbd_delete", 00:15:57.094 "bdev_rbd_create", 00:15:57.094 "accel_error_inject_error", 00:15:57.094 "ioat_scan_accel_module", 00:15:57.094 "dsa_scan_accel_module", 00:15:57.094 "iaa_scan_accel_module", 00:15:57.094 "keyring_file_remove_key", 00:15:57.094 "keyring_file_add_key", 00:15:57.094 "keyring_linux_set_options", 00:15:57.094 "iscsi_get_histogram", 00:15:57.094 "iscsi_enable_histogram", 00:15:57.094 "iscsi_set_options", 00:15:57.094 "iscsi_get_auth_groups", 00:15:57.094 "iscsi_auth_group_remove_secret", 00:15:57.094 "iscsi_auth_group_add_secret", 00:15:57.094 "iscsi_delete_auth_group", 00:15:57.094 
"iscsi_create_auth_group", 00:15:57.094 "iscsi_set_discovery_auth", 00:15:57.094 "iscsi_get_options", 00:15:57.094 "iscsi_target_node_request_logout", 00:15:57.094 "iscsi_target_node_set_redirect", 00:15:57.094 "iscsi_target_node_set_auth", 00:15:57.094 "iscsi_target_node_add_lun", 00:15:57.094 "iscsi_get_stats", 00:15:57.094 "iscsi_get_connections", 00:15:57.094 "iscsi_portal_group_set_auth", 00:15:57.094 "iscsi_start_portal_group", 00:15:57.094 "iscsi_delete_portal_group", 00:15:57.094 "iscsi_create_portal_group", 00:15:57.094 "iscsi_get_portal_groups", 00:15:57.094 "iscsi_delete_target_node", 00:15:57.094 "iscsi_target_node_remove_pg_ig_maps", 00:15:57.094 "iscsi_target_node_add_pg_ig_maps", 00:15:57.094 "iscsi_create_target_node", 00:15:57.094 "iscsi_get_target_nodes", 00:15:57.094 "iscsi_delete_initiator_group", 00:15:57.094 "iscsi_initiator_group_remove_initiators", 00:15:57.094 "iscsi_initiator_group_add_initiators", 00:15:57.094 "iscsi_create_initiator_group", 00:15:57.094 "iscsi_get_initiator_groups", 00:15:57.094 "nvmf_set_crdt", 00:15:57.094 "nvmf_set_config", 00:15:57.094 "nvmf_set_max_subsystems", 00:15:57.094 "nvmf_stop_mdns_prr", 00:15:57.094 "nvmf_publish_mdns_prr", 00:15:57.094 "nvmf_subsystem_get_listeners", 00:15:57.094 "nvmf_subsystem_get_qpairs", 00:15:57.094 "nvmf_subsystem_get_controllers", 00:15:57.094 "nvmf_get_stats", 00:15:57.094 "nvmf_get_transports", 00:15:57.094 "nvmf_create_transport", 00:15:57.094 "nvmf_get_targets", 00:15:57.094 "nvmf_delete_target", 00:15:57.094 "nvmf_create_target", 00:15:57.094 "nvmf_subsystem_allow_any_host", 00:15:57.094 "nvmf_subsystem_remove_host", 00:15:57.094 "nvmf_subsystem_add_host", 00:15:57.094 "nvmf_ns_remove_host", 00:15:57.094 "nvmf_ns_add_host", 00:15:57.094 "nvmf_subsystem_remove_ns", 00:15:57.094 "nvmf_subsystem_add_ns", 00:15:57.094 "nvmf_subsystem_listener_set_ana_state", 00:15:57.094 "nvmf_discovery_get_referrals", 00:15:57.094 "nvmf_discovery_remove_referral", 00:15:57.094 
"nvmf_discovery_add_referral", 00:15:57.094 "nvmf_subsystem_remove_listener", 00:15:57.094 "nvmf_subsystem_add_listener", 00:15:57.094 "nvmf_delete_subsystem", 00:15:57.094 "nvmf_create_subsystem", 00:15:57.094 "nvmf_get_subsystems", 00:15:57.094 "env_dpdk_get_mem_stats", 00:15:57.094 "nbd_get_disks", 00:15:57.094 "nbd_stop_disk", 00:15:57.094 "nbd_start_disk", 00:15:57.094 "ublk_recover_disk", 00:15:57.094 "ublk_get_disks", 00:15:57.094 "ublk_stop_disk", 00:15:57.094 "ublk_start_disk", 00:15:57.094 "ublk_destroy_target", 00:15:57.094 "ublk_create_target", 00:15:57.094 "virtio_blk_create_transport", 00:15:57.094 "virtio_blk_get_transports", 00:15:57.094 "vhost_controller_set_coalescing", 00:15:57.094 "vhost_get_controllers", 00:15:57.094 "vhost_delete_controller", 00:15:57.094 "vhost_create_blk_controller", 00:15:57.094 "vhost_scsi_controller_remove_target", 00:15:57.094 "vhost_scsi_controller_add_target", 00:15:57.094 "vhost_start_scsi_controller", 00:15:57.094 "vhost_create_scsi_controller", 00:15:57.094 "thread_set_cpumask", 00:15:57.094 "framework_get_governor", 00:15:57.094 "framework_get_scheduler", 00:15:57.094 "framework_set_scheduler", 00:15:57.094 "framework_get_reactors", 00:15:57.094 "thread_get_io_channels", 00:15:57.094 "thread_get_pollers", 00:15:57.094 "thread_get_stats", 00:15:57.094 "framework_monitor_context_switch", 00:15:57.094 "spdk_kill_instance", 00:15:57.094 "log_enable_timestamps", 00:15:57.094 "log_get_flags", 00:15:57.094 "log_clear_flag", 00:15:57.094 "log_set_flag", 00:15:57.094 "log_get_level", 00:15:57.094 "log_set_level", 00:15:57.094 "log_get_print_level", 00:15:57.094 "log_set_print_level", 00:15:57.094 "framework_enable_cpumask_locks", 00:15:57.094 "framework_disable_cpumask_locks", 00:15:57.094 "framework_wait_init", 00:15:57.094 "framework_start_init", 00:15:57.094 "scsi_get_devices", 00:15:57.094 "bdev_get_histogram", 00:15:57.094 "bdev_enable_histogram", 00:15:57.094 "bdev_set_qos_limit", 00:15:57.094 
"bdev_set_qd_sampling_period", 00:15:57.094 "bdev_get_bdevs", 00:15:57.094 "bdev_reset_iostat", 00:15:57.094 "bdev_get_iostat", 00:15:57.094 "bdev_examine", 00:15:57.094 "bdev_wait_for_examine", 00:15:57.094 "bdev_set_options", 00:15:57.094 "notify_get_notifications", 00:15:57.094 "notify_get_types", 00:15:57.094 "accel_get_stats", 00:15:57.094 "accel_set_options", 00:15:57.094 "accel_set_driver", 00:15:57.094 "accel_crypto_key_destroy", 00:15:57.094 "accel_crypto_keys_get", 00:15:57.094 "accel_crypto_key_create", 00:15:57.094 "accel_assign_opc", 00:15:57.094 "accel_get_module_info", 00:15:57.094 "accel_get_opc_assignments", 00:15:57.094 "vmd_rescan", 00:15:57.094 "vmd_remove_device", 00:15:57.094 "vmd_enable", 00:15:57.094 "sock_get_default_impl", 00:15:57.094 "sock_set_default_impl", 00:15:57.094 "sock_impl_set_options", 00:15:57.094 "sock_impl_get_options", 00:15:57.094 "iobuf_get_stats", 00:15:57.094 "iobuf_set_options", 00:15:57.094 "framework_get_pci_devices", 00:15:57.094 "framework_get_config", 00:15:57.094 "framework_get_subsystems", 00:15:57.094 "trace_get_info", 00:15:57.094 "trace_get_tpoint_group_mask", 00:15:57.094 "trace_disable_tpoint_group", 00:15:57.094 "trace_enable_tpoint_group", 00:15:57.094 "trace_clear_tpoint_mask", 00:15:57.094 "trace_set_tpoint_mask", 00:15:57.094 "keyring_get_keys", 00:15:57.094 "spdk_get_version", 00:15:57.094 "rpc_get_methods" 00:15:57.094 ] 00:15:57.094 16:53:58 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:15:57.094 16:53:58 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:57.094 16:53:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:57.094 16:53:58 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:57.094 16:53:58 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 60357 00:15:57.094 16:53:58 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 60357 ']' 00:15:57.094 16:53:58 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 
60357 00:15:57.094 16:53:58 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:15:57.094 16:53:58 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:57.094 16:53:58 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60357 00:15:57.094 killing process with pid 60357 00:15:57.094 16:53:58 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:57.094 16:53:58 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:57.094 16:53:58 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60357' 00:15:57.094 16:53:58 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 60357 00:15:57.094 16:53:58 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 60357 00:15:59.625 ************************************ 00:15:59.625 END TEST spdkcli_tcp 00:15:59.625 ************************************ 00:15:59.625 00:15:59.625 real 0m4.495s 00:15:59.625 user 0m7.707s 00:15:59.625 sys 0m0.754s 00:15:59.625 16:54:01 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:59.625 16:54:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:59.625 16:54:01 -- common/autotest_common.sh@1142 -- # return 0 00:15:59.625 16:54:01 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:15:59.625 16:54:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:59.625 16:54:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:59.625 16:54:01 -- common/autotest_common.sh@10 -- # set +x 00:15:59.625 ************************************ 00:15:59.625 START TEST dpdk_mem_utility 00:15:59.625 ************************************ 00:15:59.625 16:54:01 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:15:59.625 * Looking for test storage... 
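The spdkcli_tcp run above uses `socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock` so that `rpc.py -s 127.0.0.1 -p 9998` can reach the target's UNIX-domain RPC socket over TCP. The same bridging pattern can be sketched in pure Python; note this is a toy illustration, not SPDK code — a one-shot upper-casing echo server stands in for spdk_tgt, and all names here are invented for the sketch:

```python
import os
import socket
import tempfile
import threading

sock_path = os.path.join(tempfile.mkdtemp(), "spdk.sock")

# Stand-in for spdk_tgt: listens on a UNIX-domain socket.
unix_srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
unix_srv.bind(sock_path)
unix_srv.listen(1)

# Stand-in for socat's TCP-LISTEN side: bind an ephemeral TCP port.
tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind(("127.0.0.1", 0))
tcp_srv.listen(1)
port = tcp_srv.getsockname()[1]

def echo_once(srv):
    # Accept one connection and echo the request back upper-cased.
    conn, _ = srv.accept()
    conn.sendall(conn.recv(1024).upper())
    conn.close()

def bridge_once(srv, path):
    # Minimal equivalent of socat TCP-LISTEN:<port>,UNIX-CONNECT:<path>:
    # shuttle one request and one reply between the two sockets.
    conn, _ = srv.accept()
    upstream = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    upstream.connect(path)
    upstream.sendall(conn.recv(1024))
    conn.sendall(upstream.recv(1024))
    upstream.close()
    conn.close()

threading.Thread(target=echo_once, args=(unix_srv,), daemon=True).start()
threading.Thread(target=bridge_once, args=(tcp_srv, sock_path), daemon=True).start()

# The "rpc.py" side: talk TCP, reach the UNIX socket through the bridge.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"rpc_get_methods")
reply = client.recv(1024)
client.close()
print(reply.decode())
```

Binding port 0 instead of the test's fixed 9998 avoids collisions in the sketch; the real test pins the port so `rpc.py -p 9998` and the cleanup trap can find it.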
00:15:59.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:15:59.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.625 16:54:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:15:59.625 16:54:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60482 00:15:59.625 16:54:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:59.625 16:54:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60482 00:15:59.625 16:54:01 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 60482 ']' 00:15:59.625 16:54:01 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.625 16:54:01 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:59.625 16:54:01 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.625 16:54:01 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:59.625 16:54:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:15:59.882 [2024-07-22 16:54:01.317414] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:15:59.883 [2024-07-22 16:54:01.317607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60482 ] 00:15:59.883 [2024-07-22 16:54:01.481003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.447 [2024-07-22 16:54:01.779067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.383 16:54:02 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:01.383 16:54:02 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:16:01.383 16:54:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:16:01.383 16:54:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:16:01.383 16:54:02 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.383 16:54:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:16:01.383 { 00:16:01.383 "filename": "/tmp/spdk_mem_dump.txt" 00:16:01.383 } 00:16:01.383 16:54:02 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.383 16:54:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:16:01.383 DPDK memory size 820.000000 MiB in 1 heap(s) 00:16:01.383 1 heaps totaling size 820.000000 MiB 00:16:01.383 size: 820.000000 MiB heap id: 0 00:16:01.383 end heaps---------- 00:16:01.383 8 mempools totaling size 598.116089 MiB 00:16:01.383 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:16:01.383 size: 158.602051 MiB name: PDU_data_out_Pool 00:16:01.383 size: 84.521057 MiB name: bdev_io_60482 00:16:01.383 size: 51.011292 MiB name: evtpool_60482 00:16:01.383 size: 50.003479 MiB name: msgpool_60482 00:16:01.383 size: 
21.763794 MiB name: PDU_Pool 00:16:01.383 size: 19.513306 MiB name: SCSI_TASK_Pool 00:16:01.383 size: 0.026123 MiB name: Session_Pool 00:16:01.383 end mempools------- 00:16:01.383 6 memzones totaling size 4.142822 MiB 00:16:01.383 size: 1.000366 MiB name: RG_ring_0_60482 00:16:01.383 size: 1.000366 MiB name: RG_ring_1_60482 00:16:01.383 size: 1.000366 MiB name: RG_ring_4_60482 00:16:01.383 size: 1.000366 MiB name: RG_ring_5_60482 00:16:01.383 size: 0.125366 MiB name: RG_ring_2_60482 00:16:01.383 size: 0.015991 MiB name: RG_ring_3_60482 00:16:01.383 end memzones------- 00:16:01.383 16:54:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:16:01.383 heap id: 0 total size: 820.000000 MiB number of busy elements: 296 number of free elements: 18 00:16:01.383 list of free elements. size: 18.452515 MiB 00:16:01.383 element at address: 0x200000400000 with size: 1.999451 MiB 00:16:01.383 element at address: 0x200000800000 with size: 1.996887 MiB 00:16:01.383 element at address: 0x200007000000 with size: 1.995972 MiB 00:16:01.383 element at address: 0x20000b200000 with size: 1.995972 MiB 00:16:01.383 element at address: 0x200019100040 with size: 0.999939 MiB 00:16:01.383 element at address: 0x200019500040 with size: 0.999939 MiB 00:16:01.383 element at address: 0x200019600000 with size: 0.999084 MiB 00:16:01.383 element at address: 0x200003e00000 with size: 0.996094 MiB 00:16:01.383 element at address: 0x200032200000 with size: 0.994324 MiB 00:16:01.383 element at address: 0x200018e00000 with size: 0.959656 MiB 00:16:01.383 element at address: 0x200019900040 with size: 0.936401 MiB 00:16:01.383 element at address: 0x200000200000 with size: 0.830200 MiB 00:16:01.383 element at address: 0x20001b000000 with size: 0.565125 MiB 00:16:01.383 element at address: 0x200019200000 with size: 0.487976 MiB 00:16:01.383 element at address: 0x200019a00000 with size: 0.485413 MiB 00:16:01.383 element at 
address: 0x200013800000 with size: 0.467651 MiB 00:16:01.383 element at address: 0x200028400000 with size: 0.390442 MiB 00:16:01.383 element at address: 0x200003a00000 with size: 0.351990 MiB 00:16:01.383 list of standard malloc elements. size: 199.283081 MiB 00:16:01.383 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:16:01.383 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:16:01.383 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:16:01.383 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:16:01.383 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:16:01.383 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:16:01.383 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:16:01.383 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:16:01.383 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:16:01.383 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:16:01.383 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:16:01.383 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d5480 with size: 0.000244 MiB 
00:16:01.383 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d7200 with 
size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:16:01.383 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:16:01.383 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:16:01.383 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:16:01.383 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:16:01.383 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:16:01.383 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:16:01.383 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:16:01.383 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:16:01.383 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:16:01.383 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:16:01.383 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:16:01.383 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:16:01.383 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:16:01.383 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:16:01.384 element at address: 
0x200003aff980 with size: 0.000244 MiB 00:16:01.384 element at address: 0x200003affa80 with size: 0.000244 MiB 00:16:01.384 element at address: 0x200003eff000 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:16:01.384 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:16:01.384 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:16:01.384 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:16:01.384 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:16:01.384 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:16:01.384 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:16:01.384 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:16:01.384 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:16:01.384 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:16:01.384 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:16:01.384 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:16:01.384 
element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:16:01.384 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:16:01.384 element at address: 0x200013877b80 with size: 0.000244 MiB 00:16:01.384 element at address: 0x200013877c80 with size: 0.000244 MiB 00:16:01.384 element at address: 0x200013877d80 with size: 0.000244 MiB 00:16:01.384 element at address: 0x200013877e80 with size: 0.000244 MiB 00:16:01.384 element at address: 0x200013877f80 with size: 0.000244 MiB 00:16:01.384 element at address: 0x200013878080 with size: 0.000244 MiB 00:16:01.384 element at address: 0x200013878180 with size: 0.000244 MiB 00:16:01.384 element at address: 0x200013878280 with size: 0.000244 MiB 00:16:01.384 element at address: 0x200013878380 with size: 0.000244 MiB 00:16:01.384 element at address: 0x200013878480 with size: 0.000244 MiB 00:16:01.384 element at address: 0x200013878580 with size: 0.000244 MiB 00:16:01.384 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x2000192fdd00 with size: 0.000244 
MiB 00:16:01.384 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:16:01.384 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x200019abc680 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0921c0 
with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:16:01.384 element at 
address: 0x20001b093dc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:16:01.384 element at address: 0x200028463f40 with size: 0.000244 MiB 00:16:01.384 element at address: 0x200028464040 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20002846af80 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20002846b080 with size: 0.000244 MiB 
00:16:01.384 element at address: 0x20002846b180 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20002846b280 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20002846b380 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20002846b480 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20002846b580 with size: 0.000244 MiB 00:16:01.384 element at address: 0x20002846b680 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846b780 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846b880 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846b980 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846be80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846c080 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846c180 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846c280 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846c380 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846c480 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846c580 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846c680 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846c780 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846c880 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846c980 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846cc80 with 
size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846d080 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846d180 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846d280 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846d380 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846d480 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846d580 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846d680 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846d780 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846d880 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846d980 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846da80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846db80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846de80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846df80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846e080 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846e180 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846e280 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846e380 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846e480 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846e580 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846e680 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846e780 with size: 0.000244 MiB 00:16:01.385 element at address: 
0x20002846e880 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846e980 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846f080 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846f180 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846f280 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846f380 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846f480 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846f580 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846f680 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846f780 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846f880 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846f980 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:16:01.385 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:16:01.385 list of memzone associated elements. 
size: 602.264404 MiB 00:16:01.385 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:16:01.385 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:16:01.385 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:16:01.385 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:16:01.385 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:16:01.385 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_60482_0 00:16:01.385 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:16:01.385 associated memzone info: size: 48.002930 MiB name: MP_evtpool_60482_0 00:16:01.385 element at address: 0x200003fff340 with size: 48.003113 MiB 00:16:01.385 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60482_0 00:16:01.385 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:16:01.385 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:16:01.385 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:16:01.385 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:16:01.385 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:16:01.385 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_60482 00:16:01.385 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:16:01.385 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60482 00:16:01.385 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:16:01.385 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60482 00:16:01.385 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:16:01.385 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:16:01.385 element at address: 0x200019abc780 with size: 1.008179 MiB 00:16:01.385 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:16:01.385 element at address: 0x200018efde00 with size: 1.008179 MiB 00:16:01.385 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:16:01.385 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:16:01.385 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:16:01.385 element at address: 0x200003eff100 with size: 1.000549 MiB 00:16:01.385 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60482 00:16:01.385 element at address: 0x200003affb80 with size: 1.000549 MiB 00:16:01.385 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60482 00:16:01.385 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:16:01.385 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60482 00:16:01.385 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:16:01.385 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60482 00:16:01.385 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:16:01.385 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60482 00:16:01.385 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:16:01.385 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:16:01.385 element at address: 0x200013878680 with size: 0.500549 MiB 00:16:01.385 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:16:01.385 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:16:01.385 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:16:01.385 element at address: 0x200003adf740 with size: 0.125549 MiB 00:16:01.385 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60482 00:16:01.385 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:16:01.385 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:16:01.385 element at address: 0x200028464140 with size: 0.023804 MiB 00:16:01.385 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:16:01.385 element at address: 0x200003adb500 with size: 0.016174 MiB 00:16:01.385 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_60482 00:16:01.385 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:16:01.385 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:16:01.385 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:16:01.385 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60482 00:16:01.385 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:16:01.385 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60482 00:16:01.385 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:16:01.385 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:16:01.385 16:54:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:16:01.385 16:54:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60482 00:16:01.385 16:54:02 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 60482 ']' 00:16:01.385 16:54:02 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 60482 00:16:01.385 16:54:02 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:16:01.385 16:54:02 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:01.385 16:54:02 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60482 00:16:01.385 killing process with pid 60482 00:16:01.385 16:54:02 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:01.385 16:54:02 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:01.385 16:54:02 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60482' 00:16:01.385 16:54:02 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 60482 00:16:01.385 16:54:02 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 60482 00:16:03.913 00:16:03.913 real 0m3.975s 00:16:03.913 user 0m4.000s 00:16:03.913 sys 
0m0.590s 00:16:03.913 ************************************ 00:16:03.913 END TEST dpdk_mem_utility 00:16:03.913 ************************************ 00:16:03.913 16:54:05 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:03.913 16:54:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:16:03.913 16:54:05 -- common/autotest_common.sh@1142 -- # return 0 00:16:03.913 16:54:05 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:16:03.913 16:54:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:03.913 16:54:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:03.913 16:54:05 -- common/autotest_common.sh@10 -- # set +x 00:16:03.913 ************************************ 00:16:03.913 START TEST event 00:16:03.913 ************************************ 00:16:03.913 16:54:05 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:16:03.913 * Looking for test storage... 
00:16:03.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:16:03.913 16:54:05 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:03.913 16:54:05 event -- bdev/nbd_common.sh@6 -- # set -e 00:16:03.913 16:54:05 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:16:03.913 16:54:05 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:16:03.913 16:54:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:03.913 16:54:05 event -- common/autotest_common.sh@10 -- # set +x 00:16:03.913 ************************************ 00:16:03.913 START TEST event_perf 00:16:03.913 ************************************ 00:16:03.913 16:54:05 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:16:03.913 Running I/O for 1 seconds...[2024-07-22 16:54:05.292446] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:16:03.913 [2024-07-22 16:54:05.292607] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60582 ] 00:16:03.913 [2024-07-22 16:54:05.464015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:04.171 [2024-07-22 16:54:05.711980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.171 [2024-07-22 16:54:05.712214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.171 [2024-07-22 16:54:05.713044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:04.171 Running I/O for 1 seconds...[2024-07-22 16:54:05.713076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.542 00:16:05.542 lcore 0: 125287 00:16:05.542 lcore 1: 125288 00:16:05.542 lcore 2: 125289 00:16:05.542 lcore 3: 125287 00:16:05.542 done. 
00:16:05.542 ************************************ 00:16:05.542 END TEST event_perf 00:16:05.542 ************************************ 00:16:05.542 00:16:05.542 real 0m1.885s 00:16:05.542 user 0m4.618s 00:16:05.542 sys 0m0.139s 00:16:05.542 16:54:07 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:05.542 16:54:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:16:05.800 16:54:07 event -- common/autotest_common.sh@1142 -- # return 0 00:16:05.800 16:54:07 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:16:05.800 16:54:07 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:05.800 16:54:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:05.800 16:54:07 event -- common/autotest_common.sh@10 -- # set +x 00:16:05.800 ************************************ 00:16:05.800 START TEST event_reactor 00:16:05.800 ************************************ 00:16:05.800 16:54:07 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:16:05.800 [2024-07-22 16:54:07.231224] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:16:05.800 [2024-07-22 16:54:07.231411] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60616 ] 00:16:05.800 [2024-07-22 16:54:07.408527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.367 [2024-07-22 16:54:07.703068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.745 test_start 00:16:07.745 oneshot 00:16:07.745 tick 100 00:16:07.745 tick 100 00:16:07.745 tick 250 00:16:07.745 tick 100 00:16:07.745 tick 100 00:16:07.745 tick 100 00:16:07.745 tick 250 00:16:07.745 tick 500 00:16:07.745 tick 100 00:16:07.745 tick 100 00:16:07.745 tick 250 00:16:07.745 tick 100 00:16:07.745 tick 100 00:16:07.745 test_end 00:16:07.745 00:16:07.745 real 0m1.924s 00:16:07.745 user 0m1.689s 00:16:07.745 sys 0m0.124s 00:16:07.745 16:54:09 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:07.745 16:54:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:16:07.745 ************************************ 00:16:07.745 END TEST event_reactor 00:16:07.745 ************************************ 00:16:07.745 16:54:09 event -- common/autotest_common.sh@1142 -- # return 0 00:16:07.746 16:54:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:16:07.746 16:54:09 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:07.746 16:54:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:07.746 16:54:09 event -- common/autotest_common.sh@10 -- # set +x 00:16:07.746 ************************************ 00:16:07.746 START TEST event_reactor_perf 00:16:07.746 ************************************ 00:16:07.746 16:54:09 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:16:07.746 [2024-07-22 16:54:09.211170] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:16:07.746 [2024-07-22 16:54:09.211350] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60658 ] 00:16:08.004 [2024-07-22 16:54:09.386085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.262 [2024-07-22 16:54:09.646854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.636 test_start 00:16:09.636 test_end 00:16:09.636 Performance: 274015 events per second 00:16:09.636 00:16:09.636 real 0m1.884s 00:16:09.636 user 0m1.650s 00:16:09.636 sys 0m0.123s 00:16:09.636 16:54:11 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:09.636 16:54:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:16:09.636 ************************************ 00:16:09.636 END TEST event_reactor_perf 00:16:09.636 ************************************ 00:16:09.636 16:54:11 event -- common/autotest_common.sh@1142 -- # return 0 00:16:09.636 16:54:11 event -- event/event.sh@49 -- # uname -s 00:16:09.636 16:54:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:16:09.636 16:54:11 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:16:09.636 16:54:11 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:09.636 16:54:11 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.636 16:54:11 event -- common/autotest_common.sh@10 -- # set +x 00:16:09.636 ************************************ 00:16:09.636 START TEST event_scheduler 00:16:09.636 ************************************ 00:16:09.636 16:54:11 
event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:16:09.636 * Looking for test storage... 00:16:09.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:16:09.636 16:54:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:16:09.636 16:54:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60726 00:16:09.636 16:54:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:16:09.636 16:54:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:16:09.636 16:54:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60726 00:16:09.636 16:54:11 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60726 ']' 00:16:09.636 16:54:11 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.636 16:54:11 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.636 16:54:11 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.636 16:54:11 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.636 16:54:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:09.894 [2024-07-22 16:54:11.348697] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:16:09.895 [2024-07-22 16:54:11.348913] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60726 ] 00:16:10.153 [2024-07-22 16:54:11.524651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:10.411 [2024-07-22 16:54:11.828648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.411 [2024-07-22 16:54:11.828771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.411 [2024-07-22 16:54:11.829672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:10.411 [2024-07-22 16:54:11.829675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.670 16:54:12 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:10.670 16:54:12 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:16:10.670 16:54:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:16:10.670 16:54:12 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.670 16:54:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:10.670 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:10.670 POWER: Cannot set governor of lcore 0 to userspace 00:16:10.670 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:10.670 POWER: Cannot set governor of lcore 0 to performance 00:16:10.670 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:10.670 POWER: Cannot set governor of lcore 0 to userspace 00:16:10.670 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:10.670 POWER: Cannot set governor of lcore 0 to userspace 00:16:10.670 GUEST_CHANNEL: Opening 
channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:16:10.670 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:16:10.670 POWER: Unable to set Power Management Environment for lcore 0 00:16:10.670 [2024-07-22 16:54:12.280412] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:16:10.670 [2024-07-22 16:54:12.280436] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:16:10.670 [2024-07-22 16:54:12.280454] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:16:10.670 [2024-07-22 16:54:12.280474] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:16:10.670 [2024-07-22 16:54:12.280490] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:16:10.670 [2024-07-22 16:54:12.280502] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:16:10.928 16:54:12 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.928 16:54:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:16:10.928 16:54:12 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.928 16:54:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:11.186 [2024-07-22 16:54:12.603172] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:16:11.186 16:54:12 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.186 16:54:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:16:11.186 16:54:12 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:11.186 16:54:12 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:11.186 16:54:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:11.186 ************************************ 00:16:11.186 START TEST scheduler_create_thread 00:16:11.186 ************************************ 00:16:11.186 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:16:11.186 16:54:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:16:11.186 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.186 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:11.187 2 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:11.187 3 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:11.187 4 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:11.187 5 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:11.187 6 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:16:11.187 7 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:11.187 8 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:11.187 9 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:11.187 10 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.187 16:54:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:12.135 16:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.135 16:54:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:16:12.135 16:54:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:16:12.135 16:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.135 16:54:13 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:13.509 ************************************ 00:16:13.509 END TEST scheduler_create_thread 00:16:13.509 ************************************ 00:16:13.509 16:54:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.509 00:16:13.509 real 0m2.139s 00:16:13.509 user 0m0.017s 00:16:13.509 sys 0m0.007s 00:16:13.509 16:54:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:13.509 16:54:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:13.509 16:54:14 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:16:13.509 16:54:14 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:13.509 16:54:14 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60726 00:16:13.509 16:54:14 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60726 ']' 00:16:13.509 16:54:14 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60726 00:16:13.509 16:54:14 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:16:13.509 16:54:14 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:13.509 16:54:14 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60726 00:16:13.509 killing process with pid 60726 00:16:13.509 16:54:14 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:13.509 16:54:14 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:13.509 16:54:14 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60726' 00:16:13.509 16:54:14 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60726 00:16:13.509 16:54:14 event.event_scheduler -- 
common/autotest_common.sh@972 -- # wait 60726 00:16:13.767 [2024-07-22 16:54:15.234218] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:16:15.140 ************************************ 00:16:15.140 END TEST event_scheduler 00:16:15.140 ************************************ 00:16:15.140 00:16:15.140 real 0m5.419s 00:16:15.140 user 0m8.583s 00:16:15.140 sys 0m0.516s 00:16:15.140 16:54:16 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:15.140 16:54:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:15.140 16:54:16 event -- common/autotest_common.sh@1142 -- # return 0 00:16:15.140 16:54:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:16:15.140 16:54:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:16:15.140 16:54:16 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:15.140 16:54:16 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.140 16:54:16 event -- common/autotest_common.sh@10 -- # set +x 00:16:15.140 ************************************ 00:16:15.140 START TEST app_repeat 00:16:15.141 ************************************ 00:16:15.141 16:54:16 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:16:15.141 16:54:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:15.141 16:54:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:15.141 16:54:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:16:15.141 16:54:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:15.141 16:54:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:16:15.141 16:54:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:16:15.141 16:54:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:16:15.141 16:54:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60838 
00:16:15.141 16:54:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:16:15.141 Process app_repeat pid: 60838 00:16:15.141 16:54:16 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:16:15.141 16:54:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60838' 00:16:15.141 16:54:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:16:15.141 spdk_app_start Round 0 00:16:15.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:15.141 16:54:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:16:15.141 16:54:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60838 /var/tmp/spdk-nbd.sock 00:16:15.141 16:54:16 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60838 ']' 00:16:15.141 16:54:16 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:15.141 16:54:16 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:15.141 16:54:16 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:15.141 16:54:16 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:15.141 16:54:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:15.141 [2024-07-22 16:54:16.671095] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:16:15.141 [2024-07-22 16:54:16.671425] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60838 ] 00:16:15.399 [2024-07-22 16:54:16.869099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:15.657 [2024-07-22 16:54:17.201071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.657 [2024-07-22 16:54:17.201071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.222 16:54:17 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:16.222 16:54:17 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:16:16.222 16:54:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:16.480 Malloc0 00:16:16.480 16:54:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:17.046 Malloc1 00:16:17.046 16:54:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:17.046 16:54:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:17.046 16:54:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:17.046 16:54:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:17.046 16:54:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:17.046 16:54:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:17.046 16:54:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:17.046 16:54:18 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:17.046 16:54:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:17.046 16:54:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:17.046 16:54:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:17.046 16:54:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:17.046 16:54:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:16:17.046 16:54:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:17.046 16:54:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:17.046 16:54:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:17.304 /dev/nbd0 00:16:17.304 16:54:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:17.304 16:54:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:17.304 16:54:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:16:17.304 16:54:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:16:17.304 16:54:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:17.304 16:54:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:17.304 16:54:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:16:17.304 16:54:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:16:17.304 16:54:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:17.304 16:54:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:17.304 16:54:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:17.304 1+0 records in 00:16:17.304 1+0 
records out 00:16:17.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265176 s, 15.4 MB/s 00:16:17.304 16:54:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:17.304 16:54:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:16:17.304 16:54:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:17.304 16:54:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:17.304 16:54:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:16:17.304 16:54:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:17.304 16:54:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:17.304 16:54:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:17.577 /dev/nbd1 00:16:17.577 16:54:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:17.577 16:54:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:17.577 16:54:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:16:17.577 16:54:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:16:17.577 16:54:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:17.577 16:54:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:17.577 16:54:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:16:17.577 16:54:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:16:17.577 16:54:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:17.577 16:54:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:17.577 16:54:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:17.577 1+0 records in 00:16:17.577 1+0 records out 00:16:17.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355888 s, 11.5 MB/s 00:16:17.577 16:54:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:17.577 16:54:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:16:17.577 16:54:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:17.577 16:54:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:17.577 16:54:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:16:17.577 16:54:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:17.577 16:54:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:17.577 16:54:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:17.577 16:54:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:17.577 16:54:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:17.836 16:54:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:17.836 { 00:16:17.836 "nbd_device": "/dev/nbd0", 00:16:17.836 "bdev_name": "Malloc0" 00:16:17.836 }, 00:16:17.836 { 00:16:17.836 "nbd_device": "/dev/nbd1", 00:16:17.836 "bdev_name": "Malloc1" 00:16:17.836 } 00:16:17.836 ]' 00:16:17.836 16:54:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:17.836 { 00:16:17.836 "nbd_device": "/dev/nbd0", 00:16:17.836 "bdev_name": "Malloc0" 00:16:17.836 }, 00:16:17.836 { 00:16:17.836 "nbd_device": "/dev/nbd1", 00:16:17.836 "bdev_name": "Malloc1" 00:16:17.836 } 00:16:17.836 ]' 00:16:17.836 16:54:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:16:17.836 16:54:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:17.836 /dev/nbd1' 00:16:17.836 16:54:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:17.836 /dev/nbd1' 00:16:17.836 16:54:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:17.836 16:54:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:16:17.836 16:54:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:16:17.836 16:54:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:17.837 256+0 records in 00:16:17.837 256+0 records out 00:16:17.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00797387 s, 132 MB/s 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:17.837 256+0 records in 00:16:17.837 256+0 records out 00:16:17.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0315756 s, 33.2 MB/s 00:16:17.837 16:54:19 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:17.837 256+0 records in 00:16:17.837 256+0 records out 00:16:17.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.033245 s, 31.5 MB/s 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:17.837 16:54:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:18.095 16:54:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:18.095 16:54:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:18.095 16:54:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:18.095 16:54:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.095 16:54:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.095 16:54:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:18.095 16:54:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:18.095 16:54:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.095 16:54:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:18.095 16:54:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:18.353 16:54:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:18.353 16:54:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:18.353 16:54:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:18.353 16:54:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.353 16:54:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.353 16:54:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:18.353 16:54:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:16:18.353 16:54:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.353 16:54:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:18.353 16:54:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:18.353 16:54:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:18.611 16:54:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:18.611 16:54:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:18.611 16:54:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:18.611 16:54:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:18.611 16:54:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:16:18.611 16:54:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:18.611 16:54:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:16:18.611 16:54:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:16:18.611 16:54:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:16:18.611 16:54:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:16:18.611 16:54:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:18.611 16:54:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:16:18.611 16:54:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:19.176 16:54:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:16:20.547 [2024-07-22 16:54:21.908124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:20.547 [2024-07-22 16:54:22.144136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.547 [2024-07-22 16:54:22.144146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.805 
[2024-07-22 16:54:22.335741] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:20.805 [2024-07-22 16:54:22.335877] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:22.175 spdk_app_start Round 1 00:16:22.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:22.175 16:54:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:16:22.175 16:54:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:16:22.175 16:54:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60838 /var/tmp/spdk-nbd.sock 00:16:22.175 16:54:23 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60838 ']' 00:16:22.175 16:54:23 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:22.175 16:54:23 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.175 16:54:23 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:16:22.175 16:54:23 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.175 16:54:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:22.432 16:54:23 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.432 16:54:23 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:16:22.432 16:54:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:22.689 Malloc0 00:16:22.689 16:54:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:22.946 Malloc1 00:16:23.204 16:54:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:23.204 16:54:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:23.204 16:54:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:23.204 16:54:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:23.204 16:54:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:23.204 16:54:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:23.204 16:54:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:23.204 16:54:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:23.204 16:54:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:23.204 16:54:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:23.204 16:54:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:23.204 16:54:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:23.204 16:54:24 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:16:23.204 16:54:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:23.204 16:54:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:23.204 16:54:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:23.204 /dev/nbd0 00:16:23.204 16:54:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:23.204 16:54:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:23.204 16:54:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:16:23.204 16:54:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:16:23.204 16:54:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:23.204 16:54:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:23.204 16:54:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:16:23.204 16:54:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:16:23.204 16:54:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:23.204 16:54:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:23.204 16:54:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:23.204 1+0 records in 00:16:23.204 1+0 records out 00:16:23.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042683 s, 9.6 MB/s 00:16:23.464 16:54:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:23.464 16:54:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:16:23.464 16:54:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:23.464 16:54:24 
event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:23.464 16:54:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:16:23.464 16:54:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:23.464 16:54:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:23.464 16:54:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:23.464 /dev/nbd1 00:16:23.464 16:54:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:23.464 16:54:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:23.464 16:54:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:16:23.464 16:54:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:16:23.464 16:54:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:23.464 16:54:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:23.464 16:54:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:16:23.464 16:54:25 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:16:23.464 16:54:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:23.464 16:54:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:23.464 16:54:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:23.464 1+0 records in 00:16:23.464 1+0 records out 00:16:23.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031623 s, 13.0 MB/s 00:16:23.722 16:54:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:23.722 16:54:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:16:23.722 16:54:25 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:23.722 16:54:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:23.722 16:54:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:16:23.722 16:54:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:23.722 16:54:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:23.722 16:54:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:23.722 16:54:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:23.722 16:54:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:23.980 { 00:16:23.980 "nbd_device": "/dev/nbd0", 00:16:23.980 "bdev_name": "Malloc0" 00:16:23.980 }, 00:16:23.980 { 00:16:23.980 "nbd_device": "/dev/nbd1", 00:16:23.980 "bdev_name": "Malloc1" 00:16:23.980 } 00:16:23.980 ]' 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:23.980 { 00:16:23.980 "nbd_device": "/dev/nbd0", 00:16:23.980 "bdev_name": "Malloc0" 00:16:23.980 }, 00:16:23.980 { 00:16:23.980 "nbd_device": "/dev/nbd1", 00:16:23.980 "bdev_name": "Malloc1" 00:16:23.980 } 00:16:23.980 ]' 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:23.980 /dev/nbd1' 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:23.980 /dev/nbd1' 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:16:23.980 
16:54:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:23.980 256+0 records in 00:16:23.980 256+0 records out 00:16:23.980 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00719407 s, 146 MB/s 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:23.980 256+0 records in 00:16:23.980 256+0 records out 00:16:23.980 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256446 s, 40.9 MB/s 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:23.980 256+0 records in 00:16:23.980 256+0 records out 00:16:23.980 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0388147 s, 27.0 MB/s 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:23.980 16:54:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:23.981 16:54:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:23.981 16:54:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:16:23.981 16:54:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:23.981 16:54:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:24.237 16:54:25 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:24.237 16:54:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:24.237 16:54:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:24.237 16:54:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:24.237 16:54:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:24.237 16:54:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:24.237 16:54:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:24.237 16:54:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:24.237 16:54:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:24.237 16:54:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:24.495 16:54:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:24.495 16:54:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:24.495 16:54:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:24.495 16:54:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:24.495 16:54:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:24.495 16:54:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:24.495 16:54:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:24.495 16:54:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:24.495 16:54:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:24.495 16:54:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:24.495 16:54:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:24.752 16:54:26 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:24.752 16:54:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:24.752 16:54:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:24.752 16:54:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:24.752 16:54:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:16:24.752 16:54:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:24.752 16:54:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:16:24.752 16:54:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:16:24.752 16:54:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:16:24.752 16:54:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:16:24.752 16:54:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:24.752 16:54:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:16:24.752 16:54:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:25.318 16:54:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:16:26.692 [2024-07-22 16:54:28.001867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:26.692 [2024-07-22 16:54:28.239172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.692 [2024-07-22 16:54:28.239175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.950 [2024-07-22 16:54:28.431513] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:26.950 [2024-07-22 16:54:28.431608] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:28.322 spdk_app_start Round 2 00:16:28.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:16:28.322 16:54:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:16:28.322 16:54:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:16:28.322 16:54:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60838 /var/tmp/spdk-nbd.sock 00:16:28.322 16:54:29 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60838 ']' 00:16:28.322 16:54:29 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:28.322 16:54:29 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.322 16:54:29 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:28.322 16:54:29 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.322 16:54:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:28.579 16:54:30 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.579 16:54:30 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:16:28.579 16:54:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:28.836 Malloc0 00:16:28.836 16:54:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:29.094 Malloc1 00:16:29.094 16:54:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:29.094 16:54:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:29.094 16:54:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:29.094 16:54:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:29.094 16:54:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:29.094 16:54:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:29.094 16:54:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:29.094 16:54:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:29.094 16:54:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:29.094 16:54:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:29.094 16:54:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:29.094 16:54:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:29.094 16:54:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:16:29.094 16:54:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:29.094 16:54:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:29.094 16:54:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:29.386 /dev/nbd0 00:16:29.644 16:54:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:29.645 16:54:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:29.645 16:54:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:16:29.645 16:54:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:16:29.645 16:54:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:29.645 16:54:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:29.645 16:54:31 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:16:29.645 16:54:31 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:16:29.645 16:54:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 
00:16:29.645 16:54:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:29.645 16:54:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:29.645 1+0 records in 00:16:29.645 1+0 records out 00:16:29.645 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628488 s, 6.5 MB/s 00:16:29.645 16:54:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:29.645 16:54:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:16:29.645 16:54:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:29.645 16:54:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:29.645 16:54:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:16:29.645 16:54:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:29.645 16:54:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:29.645 16:54:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:29.902 /dev/nbd1 00:16:29.902 16:54:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:29.902 16:54:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:29.902 16:54:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:16:29.902 16:54:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:16:29.902 16:54:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:29.902 16:54:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:29.902 16:54:31 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:16:29.902 16:54:31 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:16:29.902 16:54:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:29.902 16:54:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:29.902 16:54:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:29.902 1+0 records in 00:16:29.902 1+0 records out 00:16:29.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324478 s, 12.6 MB/s 00:16:29.902 16:54:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:29.902 16:54:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:16:29.902 16:54:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:29.902 16:54:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:29.902 16:54:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:16:29.902 16:54:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:29.902 16:54:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:29.902 16:54:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:29.902 16:54:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:29.902 16:54:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:30.160 { 00:16:30.160 "nbd_device": "/dev/nbd0", 00:16:30.160 "bdev_name": "Malloc0" 00:16:30.160 }, 00:16:30.160 { 00:16:30.160 "nbd_device": "/dev/nbd1", 00:16:30.160 "bdev_name": "Malloc1" 00:16:30.160 } 00:16:30.160 ]' 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:30.160 { 
00:16:30.160 "nbd_device": "/dev/nbd0", 00:16:30.160 "bdev_name": "Malloc0" 00:16:30.160 }, 00:16:30.160 { 00:16:30.160 "nbd_device": "/dev/nbd1", 00:16:30.160 "bdev_name": "Malloc1" 00:16:30.160 } 00:16:30.160 ]' 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:30.160 /dev/nbd1' 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:30.160 /dev/nbd1' 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:30.160 256+0 records in 00:16:30.160 256+0 records out 00:16:30.160 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00752604 s, 139 MB/s 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:30.160 16:54:31 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:30.160 256+0 records in 00:16:30.160 256+0 records out 00:16:30.160 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0302008 s, 34.7 MB/s 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:30.160 256+0 records in 00:16:30.160 256+0 records out 00:16:30.160 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273779 s, 38.3 MB/s 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:30.160 16:54:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:30.161 16:54:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:30.161 16:54:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:30.161 16:54:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:30.161 16:54:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:30.161 16:54:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:30.161 16:54:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:30.161 16:54:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:16:30.161 16:54:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:30.161 16:54:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:30.161 16:54:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:30.161 16:54:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:30.161 16:54:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:16:30.161 16:54:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:30.161 16:54:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:30.418 16:54:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:30.418 16:54:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:30.418 16:54:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:30.418 16:54:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:30.418 16:54:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:30.418 16:54:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:30.418 16:54:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:30.418 16:54:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:30.418 16:54:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:30.418 16:54:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:30.676 16:54:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:30.676 16:54:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:30.676 16:54:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:30.676 16:54:32 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:30.676 16:54:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:30.676 16:54:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:30.676 16:54:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:30.676 16:54:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:30.676 16:54:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:30.676 16:54:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:30.676 16:54:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:30.933 16:54:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:30.934 16:54:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:30.934 16:54:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:30.934 16:54:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:30.934 16:54:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:16:30.934 16:54:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:31.192 16:54:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:16:31.192 16:54:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:16:31.192 16:54:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:16:31.192 16:54:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:16:31.192 16:54:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:31.192 16:54:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:16:31.192 16:54:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:31.449 16:54:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:16:32.822 
[2024-07-22 16:54:34.182508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:32.822 [2024-07-22 16:54:34.419631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.822 [2024-07-22 16:54:34.419641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.080 [2024-07-22 16:54:34.615305] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:33.080 [2024-07-22 16:54:34.615381] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:34.453 16:54:35 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60838 /var/tmp/spdk-nbd.sock 00:16:34.453 16:54:35 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60838 ']' 00:16:34.453 16:54:35 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:34.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:34.453 16:54:35 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:34.453 16:54:35 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:16:34.453 16:54:35 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:34.453 16:54:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:34.711 16:54:36 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:34.711 16:54:36 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:16:34.711 16:54:36 event.app_repeat -- event/event.sh@39 -- # killprocess 60838 00:16:34.711 16:54:36 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60838 ']' 00:16:34.711 16:54:36 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60838 00:16:34.711 16:54:36 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:16:34.711 16:54:36 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:34.711 16:54:36 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60838 00:16:34.711 16:54:36 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:34.711 16:54:36 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:34.711 16:54:36 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60838' 00:16:34.711 killing process with pid 60838 00:16:34.711 16:54:36 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60838 00:16:34.711 16:54:36 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60838 00:16:36.085 spdk_app_start is called in Round 0. 00:16:36.085 Shutdown signal received, stop current app iteration 00:16:36.085 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:16:36.085 spdk_app_start is called in Round 1. 00:16:36.085 Shutdown signal received, stop current app iteration 00:16:36.085 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:16:36.085 spdk_app_start is called in Round 2. 
00:16:36.085 Shutdown signal received, stop current app iteration 00:16:36.085 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:16:36.085 spdk_app_start is called in Round 3. 00:16:36.085 Shutdown signal received, stop current app iteration 00:16:36.085 ************************************ 00:16:36.085 END TEST app_repeat 00:16:36.085 ************************************ 00:16:36.085 16:54:37 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:16:36.085 16:54:37 event.app_repeat -- event/event.sh@42 -- # return 0 00:16:36.085 00:16:36.085 real 0m20.823s 00:16:36.085 user 0m44.135s 00:16:36.085 sys 0m2.998s 00:16:36.085 16:54:37 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:36.085 16:54:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:36.085 16:54:37 event -- common/autotest_common.sh@1142 -- # return 0 00:16:36.085 16:54:37 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:16:36.085 16:54:37 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:16:36.085 16:54:37 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:36.085 16:54:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:36.085 16:54:37 event -- common/autotest_common.sh@10 -- # set +x 00:16:36.085 ************************************ 00:16:36.085 START TEST cpu_locks 00:16:36.085 ************************************ 00:16:36.085 16:54:37 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:16:36.085 * Looking for test storage... 
00:16:36.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:16:36.085 16:54:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:16:36.085 16:54:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:16:36.085 16:54:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:16:36.085 16:54:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:16:36.085 16:54:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:36.085 16:54:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:36.085 16:54:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:36.085 ************************************ 00:16:36.085 START TEST default_locks 00:16:36.085 ************************************ 00:16:36.085 16:54:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:16:36.085 16:54:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=61290 00:16:36.085 16:54:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:36.085 16:54:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 61290 00:16:36.085 16:54:37 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 61290 ']' 00:16:36.085 16:54:37 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.085 16:54:37 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:36.085 16:54:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:36.085 16:54:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:36.085 16:54:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:16:36.085 [2024-07-22 16:54:37.697287] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:16:36.085 [2024-07-22 16:54:37.697490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61290 ] 00:16:36.343 [2024-07-22 16:54:37.862857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.601 [2024-07-22 16:54:38.165794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.534 16:54:38 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.534 16:54:38 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:16:37.534 16:54:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 61290 00:16:37.534 16:54:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 61290 00:16:37.534 16:54:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:37.792 16:54:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 61290 00:16:37.792 16:54:39 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 61290 ']' 00:16:37.792 16:54:39 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 61290 00:16:37.792 16:54:39 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:16:37.792 16:54:39 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:37.792 16:54:39 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61290 00:16:38.050 
killing process with pid 61290 00:16:38.050 16:54:39 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:38.050 16:54:39 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:38.050 16:54:39 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61290' 00:16:38.050 16:54:39 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 61290 00:16:38.050 16:54:39 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 61290 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 61290 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61290 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 61290 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 61290 ']' 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:40.581 16:54:41 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:40.581 ERROR: process (pid: 61290) is no longer running 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:16:40.581 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61290) - No such process 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:16:40.581 00:16:40.581 real 0m4.159s 00:16:40.581 user 0m4.143s 00:16:40.581 sys 0m0.758s 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:40.581 16:54:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:16:40.581 
************************************ 00:16:40.581 END TEST default_locks 00:16:40.581 ************************************ 00:16:40.581 16:54:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:16:40.581 16:54:41 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:16:40.582 16:54:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:40.582 16:54:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:40.582 16:54:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:40.582 ************************************ 00:16:40.582 START TEST default_locks_via_rpc 00:16:40.582 ************************************ 00:16:40.582 16:54:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:16:40.582 16:54:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61365 00:16:40.582 16:54:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 61365 00:16:40.582 16:54:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61365 ']' 00:16:40.582 16:54:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:40.582 16:54:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.582 16:54:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:40.582 16:54:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:40.582 16:54:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:40.582 16:54:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.582 [2024-07-22 16:54:41.932899] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:16:40.582 [2024-07-22 16:54:41.933128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61365 ] 00:16:40.582 [2024-07-22 16:54:42.108944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.840 [2024-07-22 16:54:42.366989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.774 16:54:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.774 16:54:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:16:41.774 16:54:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:16:41.774 16:54:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.774 16:54:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.774 16:54:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.774 16:54:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:16:41.774 16:54:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:16:41.774 16:54:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:16:41.774 16:54:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:16:41.774 16:54:43 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:16:41.774 16:54:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.774 16:54:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.774 16:54:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.774 16:54:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 61365 00:16:41.774 16:54:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61365 00:16:41.774 16:54:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:42.032 16:54:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61365 00:16:42.032 16:54:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 61365 ']' 00:16:42.032 16:54:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 61365 00:16:42.032 16:54:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:16:42.032 16:54:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:42.032 16:54:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61365 00:16:42.032 16:54:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:42.032 16:54:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:42.032 killing process with pid 61365 00:16:42.032 16:54:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61365' 00:16:42.032 16:54:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 61365 00:16:42.032 
16:54:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 61365 00:16:44.560 00:16:44.560 real 0m4.172s 00:16:44.560 user 0m4.123s 00:16:44.560 sys 0m0.762s 00:16:44.560 ************************************ 00:16:44.560 END TEST default_locks_via_rpc 00:16:44.560 ************************************ 00:16:44.560 16:54:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:44.560 16:54:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.560 16:54:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:16:44.560 16:54:45 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:16:44.560 16:54:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:44.560 16:54:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:44.560 16:54:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:44.560 ************************************ 00:16:44.560 START TEST non_locking_app_on_locked_coremask 00:16:44.560 ************************************ 00:16:44.560 16:54:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:16:44.560 16:54:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61443 00:16:44.560 16:54:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:44.561 16:54:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61443 /var/tmp/spdk.sock 00:16:44.561 16:54:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61443 ']' 00:16:44.561 16:54:45 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.561 16:54:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:44.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.561 16:54:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.561 16:54:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:44.561 16:54:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:44.561 [2024-07-22 16:54:46.156254] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:16:44.561 [2024-07-22 16:54:46.156482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61443 ] 00:16:44.819 [2024-07-22 16:54:46.330931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.077 [2024-07-22 16:54:46.657319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.014 16:54:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.014 16:54:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:16:46.014 16:54:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61468 00:16:46.014 16:54:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:16:46.014 16:54:47 
event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61468 /var/tmp/spdk2.sock 00:16:46.014 16:54:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61468 ']' 00:16:46.014 16:54:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:46.014 16:54:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:46.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:46.014 16:54:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:46.014 16:54:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:46.014 16:54:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:46.272 [2024-07-22 16:54:47.644666] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:16:46.272 [2024-07-22 16:54:47.644897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61468 ] 00:16:46.272 [2024-07-22 16:54:47.832976] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:16:46.272 [2024-07-22 16:54:47.833076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.838 [2024-07-22 16:54:48.376544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.745 16:54:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:48.745 16:54:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:16:48.745 16:54:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61443 00:16:48.745 16:54:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61443 00:16:48.745 16:54:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:49.679 16:54:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61443 00:16:49.679 16:54:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61443 ']' 00:16:49.679 16:54:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61443 00:16:49.679 16:54:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:16:49.679 16:54:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:49.679 16:54:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61443 00:16:49.679 16:54:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:49.679 16:54:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:49.679 killing process with pid 61443 00:16:49.679 16:54:51 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 61443' 00:16:49.679 16:54:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61443 00:16:49.679 16:54:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61443 00:16:54.943 16:54:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61468 00:16:54.943 16:54:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61468 ']' 00:16:54.943 16:54:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61468 00:16:54.943 16:54:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:16:54.943 16:54:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:54.943 16:54:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61468 00:16:54.943 16:54:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:54.943 16:54:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:54.943 killing process with pid 61468 00:16:54.944 16:54:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61468' 00:16:54.944 16:54:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61468 00:16:54.944 16:54:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61468 00:16:56.846 00:16:56.846 real 0m12.268s 00:16:56.846 user 0m12.785s 00:16:56.846 sys 0m1.600s 00:16:56.846 16:54:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:16:56.846 16:54:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:56.846 ************************************ 00:16:56.846 END TEST non_locking_app_on_locked_coremask 00:16:56.846 ************************************ 00:16:56.846 16:54:58 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:16:56.846 16:54:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:16:56.846 16:54:58 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:56.846 16:54:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:56.846 16:54:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:56.846 ************************************ 00:16:56.846 START TEST locking_app_on_unlocked_coremask 00:16:56.846 ************************************ 00:16:56.846 16:54:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:16:56.846 16:54:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61623 00:16:56.846 16:54:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61623 /var/tmp/spdk.sock 00:16:56.846 16:54:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61623 ']' 00:16:56.846 16:54:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:16:56.846 16:54:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.846 16:54:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:56.846 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:16:56.846 16:54:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.846 16:54:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:56.846 16:54:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:57.104 [2024-07-22 16:54:58.478848] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:16:57.104 [2024-07-22 16:54:58.479062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61623 ] 00:16:57.104 [2024-07-22 16:54:58.653555] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:16:57.104 [2024-07-22 16:54:58.653628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.363 [2024-07-22 16:54:58.936840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.296 16:54:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.296 16:54:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:16:58.296 16:54:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61639 00:16:58.296 16:54:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:16:58.296 16:54:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61639 /var/tmp/spdk2.sock 00:16:58.296 16:54:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61639 ']' 00:16:58.296 16:54:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:58.296 16:54:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:58.296 16:54:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:58.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:58.296 16:54:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:58.296 16:54:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:58.554 [2024-07-22 16:55:00.020247] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:16:58.554 [2024-07-22 16:55:00.020497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61639 ] 00:16:58.811 [2024-07-22 16:55:00.202442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.379 [2024-07-22 16:55:00.726590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.280 16:55:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:01.280 16:55:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:17:01.280 16:55:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61639 00:17:01.280 16:55:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61639 00:17:01.280 16:55:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:01.851 16:55:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61623 00:17:01.851 16:55:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61623 ']' 00:17:01.851 16:55:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 61623 00:17:01.851 16:55:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:17:01.851 16:55:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:01.851 16:55:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61623 00:17:01.851 16:55:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:17:01.851 16:55:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:01.851 killing process with pid 61623 00:17:01.851 16:55:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61623' 00:17:01.851 16:55:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 61623 00:17:01.851 16:55:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 61623 00:17:07.199 16:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61639 00:17:07.199 16:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61639 ']' 00:17:07.199 16:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 61639 00:17:07.199 16:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:17:07.199 16:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:07.199 16:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61639 00:17:07.199 16:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:07.199 16:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:07.199 killing process with pid 61639 00:17:07.199 16:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61639' 00:17:07.199 16:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 61639 00:17:07.199 16:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@972 -- # wait 61639 00:17:09.125 00:17:09.125 real 0m12.271s 00:17:09.125 user 0m12.659s 00:17:09.125 sys 0m1.591s 00:17:09.125 16:55:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:09.125 16:55:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:09.125 ************************************ 00:17:09.125 END TEST locking_app_on_unlocked_coremask 00:17:09.125 ************************************ 00:17:09.125 16:55:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:17:09.125 16:55:10 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:17:09.125 16:55:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:09.125 16:55:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:09.125 16:55:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:09.125 ************************************ 00:17:09.125 START TEST locking_app_on_locked_coremask 00:17:09.125 ************************************ 00:17:09.125 16:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:17:09.125 16:55:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61799 00:17:09.125 16:55:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61799 /var/tmp/spdk.sock 00:17:09.125 16:55:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:09.125 16:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61799 ']' 00:17:09.125 16:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:17:09.125 16:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.126 16:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.126 16:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.126 16:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:09.383 [2024-07-22 16:55:10.780604] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:09.384 [2024-07-22 16:55:10.780812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61799 ] 00:17:09.384 [2024-07-22 16:55:10.948810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.642 [2024-07-22 16:55:11.210374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.576 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.576 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:17:10.576 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61815 00:17:10.576 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:17:10.576 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61815 
/var/tmp/spdk2.sock 00:17:10.576 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:17:10.576 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61815 /var/tmp/spdk2.sock 00:17:10.576 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:17:10.576 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.576 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:17:10.576 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.576 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 61815 /var/tmp/spdk2.sock 00:17:10.576 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61815 ']' 00:17:10.576 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:10.576 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:10.576 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:17:10.576 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.576 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:10.835 [2024-07-22 16:55:12.213567] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:10.835 [2024-07-22 16:55:12.213778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61815 ] 00:17:10.835 [2024-07-22 16:55:12.388334] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61799 has claimed it. 00:17:10.835 [2024-07-22 16:55:12.392498] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:17:11.402 ERROR: process (pid: 61815) is no longer running 00:17:11.402 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61815) - No such process 00:17:11.402 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.402 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:17:11.402 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:17:11.402 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:11.402 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:11.402 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:11.402 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61799 00:17:11.402 16:55:12 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61799 00:17:11.402 16:55:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:11.660 16:55:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61799 00:17:11.660 16:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61799 ']' 00:17:11.660 16:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61799 00:17:11.660 16:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:17:11.660 16:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:11.660 16:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61799 00:17:11.918 16:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:11.918 16:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:11.918 killing process with pid 61799 00:17:11.918 16:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61799' 00:17:11.918 16:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61799 00:17:11.918 16:55:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61799 00:17:14.449 00:17:14.449 real 0m5.037s 00:17:14.449 user 0m5.297s 00:17:14.449 sys 0m0.902s 00:17:14.449 16:55:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:14.449 16:55:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:14.450 
************************************ 00:17:14.450 END TEST locking_app_on_locked_coremask 00:17:14.450 ************************************ 00:17:14.450 16:55:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:17:14.450 16:55:15 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:17:14.450 16:55:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:14.450 16:55:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:14.450 16:55:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:14.450 ************************************ 00:17:14.450 START TEST locking_overlapped_coremask 00:17:14.450 ************************************ 00:17:14.450 16:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:17:14.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.450 16:55:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61885 00:17:14.450 16:55:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61885 /var/tmp/spdk.sock 00:17:14.450 16:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 61885 ']' 00:17:14.450 16:55:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:14.450 16:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.450 16:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:14.450 16:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:14.450 16:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:14.450 16:55:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:14.450 [2024-07-22 16:55:15.896473] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:14.450 [2024-07-22 16:55:15.896791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61885 ] 00:17:14.708 [2024-07-22 16:55:16.077173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:14.966 [2024-07-22 16:55:16.351226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.966 [2024-07-22 16:55:16.351366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.966 [2024-07-22 16:55:16.351389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.901 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:15.901 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:17:15.901 16:55:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:17:15.901 16:55:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61908 00:17:15.901 16:55:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61908 /var/tmp/spdk2.sock 00:17:15.901 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:17:15.901 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61908 
/var/tmp/spdk2.sock 00:17:15.901 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:17:15.901 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:15.901 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:17:15.901 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:15.901 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 61908 /var/tmp/spdk2.sock 00:17:15.901 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 61908 ']' 00:17:15.901 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:15.901 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:15.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:15.901 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:15.901 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:15.901 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:15.901 [2024-07-22 16:55:17.299223] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:17:15.901 [2024-07-22 16:55:17.299415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61908 ] 00:17:15.901 [2024-07-22 16:55:17.477514] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61885 has claimed it. 00:17:15.901 [2024-07-22 16:55:17.477904] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:17:16.469 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61908) - No such process 00:17:16.469 ERROR: process (pid: 61908) is no longer running 00:17:16.469 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:16.469 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:17:16.469 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:17:16.469 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:16.469 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:16.469 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:16.469 16:55:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:17:16.469 16:55:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:17:16.469 16:55:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:17:16.469 16:55:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:17:16.469 16:55:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61885 00:17:16.469 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 61885 ']' 00:17:16.469 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 61885 00:17:16.469 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:17:16.469 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:16.469 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61885 00:17:16.469 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:16.469 killing process with pid 61885 00:17:16.469 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:16.469 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61885' 00:17:16.469 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 61885 00:17:16.469 16:55:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 61885 00:17:18.999 00:17:18.999 real 0m4.542s 00:17:18.999 user 0m11.576s 00:17:18.999 sys 0m0.693s 00:17:18.999 16:55:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:18.999 16:55:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:18.999 ************************************ 
00:17:18.999 END TEST locking_overlapped_coremask 00:17:18.999 ************************************ 00:17:18.999 16:55:20 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:17:18.999 16:55:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:17:18.999 16:55:20 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:18.999 16:55:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:18.999 16:55:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:18.999 ************************************ 00:17:18.999 START TEST locking_overlapped_coremask_via_rpc 00:17:18.999 ************************************ 00:17:18.999 16:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:17:18.999 16:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61972 00:17:18.999 16:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61972 /var/tmp/spdk.sock 00:17:18.999 16:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61972 ']' 00:17:18.999 16:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.999 16:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.999 16:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:17:18.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:18.999 16:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.999 16:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.999 16:55:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.999 [2024-07-22 16:55:20.468673] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:18.999 [2024-07-22 16:55:20.468878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61972 ] 00:17:19.258 [2024-07-22 16:55:20.630575] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:17:19.258 [2024-07-22 16:55:20.630642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:19.515 [2024-07-22 16:55:20.897360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.515 [2024-07-22 16:55:20.897480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.515 [2024-07-22 16:55:20.897506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.449 16:55:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.449 16:55:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:17:20.449 16:55:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61990 00:17:20.449 16:55:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 
00:17:20.449 16:55:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61990 /var/tmp/spdk2.sock 00:17:20.449 16:55:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61990 ']' 00:17:20.449 16:55:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:20.449 16:55:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:20.449 16:55:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:20.449 16:55:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.449 16:55:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.449 [2024-07-22 16:55:21.839082] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:20.449 [2024-07-22 16:55:21.839295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61990 ] 00:17:20.449 [2024-07-22 16:55:22.009463] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:17:20.449 [2024-07-22 16:55:22.013296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:21.014 [2024-07-22 16:55:22.555689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:21.014 [2024-07-22 16:55:22.559372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.014 [2024-07-22 16:55:22.559388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:22.915 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.915 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:17:22.915 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:17:22.915 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.915 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.915 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.915 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:22.915 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:17:22.915 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:22.915 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:22.915 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:22.915 16:55:24 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:22.915 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:22.915 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:22.915 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.915 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.915 [2024-07-22 16:55:24.524502] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61972 has claimed it. 00:17:23.173 request: 00:17:23.173 { 00:17:23.173 "method": "framework_enable_cpumask_locks", 00:17:23.173 "req_id": 1 00:17:23.173 } 00:17:23.173 Got JSON-RPC error response 00:17:23.173 response: 00:17:23.173 { 00:17:23.173 "code": -32603, 00:17:23.173 "message": "Failed to claim CPU core: 2" 00:17:23.173 } 00:17:23.173 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:23.173 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:17:23.173 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:23.173 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:23.173 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:23.173 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61972 /var/tmp/spdk.sock 00:17:23.173 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # 
'[' -z 61972 ']' 00:17:23.173 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.173 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:23.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.173 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.173 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:23.173 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.431 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.431 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:17:23.431 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61990 /var/tmp/spdk2.sock 00:17:23.431 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61990 ']' 00:17:23.431 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:23.431 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:23.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:23.431 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:17:23.431 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:23.431 16:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.690 16:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.690 16:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:17:23.690 16:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:17:23.690 16:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:17:23.690 16:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:17:23.690 16:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:17:23.690 00:17:23.690 real 0m4.798s 00:17:23.690 user 0m1.653s 00:17:23.690 sys 0m0.258s 00:17:23.690 16:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:23.690 ************************************ 00:17:23.690 END TEST locking_overlapped_coremask_via_rpc 00:17:23.690 ************************************ 00:17:23.690 16:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.690 16:55:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:17:23.690 16:55:25 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:17:23.690 16:55:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 
61972 ]] 00:17:23.690 16:55:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61972 00:17:23.690 16:55:25 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61972 ']' 00:17:23.690 16:55:25 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61972 00:17:23.690 16:55:25 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:17:23.690 16:55:25 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:23.690 16:55:25 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61972 00:17:23.690 16:55:25 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:23.690 16:55:25 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:23.690 16:55:25 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61972' 00:17:23.690 killing process with pid 61972 00:17:23.690 16:55:25 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61972 00:17:23.690 16:55:25 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61972 00:17:26.222 16:55:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61990 ]] 00:17:26.222 16:55:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61990 00:17:26.222 16:55:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61990 ']' 00:17:26.222 16:55:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61990 00:17:26.222 16:55:27 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:17:26.222 16:55:27 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:26.222 16:55:27 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61990 00:17:26.222 16:55:27 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:26.222 16:55:27 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:26.222 16:55:27 event.cpu_locks -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 61990' 00:17:26.222 killing process with pid 61990 00:17:26.222 16:55:27 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61990 00:17:26.222 16:55:27 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61990 00:17:28.751 16:55:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:17:28.751 16:55:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:17:28.751 16:55:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61972 ]] 00:17:28.751 16:55:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61972 00:17:28.751 16:55:29 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61972 ']' 00:17:28.751 16:55:29 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61972 00:17:28.752 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61972) - No such process 00:17:28.752 Process with pid 61972 is not found 00:17:28.752 16:55:29 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61972 is not found' 00:17:28.752 16:55:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61990 ]] 00:17:28.752 16:55:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61990 00:17:28.752 16:55:29 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61990 ']' 00:17:28.752 16:55:29 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61990 00:17:28.752 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61990) - No such process 00:17:28.752 Process with pid 61990 is not found 00:17:28.752 16:55:29 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61990 is not found' 00:17:28.752 16:55:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:17:28.752 00:17:28.752 real 0m52.456s 00:17:28.752 user 1m27.279s 00:17:28.752 sys 0m7.808s 00:17:28.752 16:55:29 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:28.752 16:55:29 
event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:28.752 ************************************ 00:17:28.752 END TEST cpu_locks 00:17:28.752 ************************************ 00:17:28.752 16:55:29 event -- common/autotest_common.sh@1142 -- # return 0 00:17:28.752 00:17:28.752 real 1m24.815s 00:17:28.752 user 2m28.097s 00:17:28.752 sys 0m11.953s 00:17:28.752 16:55:29 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:28.752 16:55:29 event -- common/autotest_common.sh@10 -- # set +x 00:17:28.752 ************************************ 00:17:28.752 END TEST event 00:17:28.752 ************************************ 00:17:28.752 16:55:29 -- common/autotest_common.sh@1142 -- # return 0 00:17:28.752 16:55:29 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:17:28.752 16:55:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:28.752 16:55:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:28.752 16:55:29 -- common/autotest_common.sh@10 -- # set +x 00:17:28.752 ************************************ 00:17:28.752 START TEST thread 00:17:28.752 ************************************ 00:17:28.752 16:55:30 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:17:28.752 * Looking for test storage... 
00:17:28.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:17:28.752 16:55:30 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:17:28.752 16:55:30 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:17:28.752 16:55:30 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:28.752 16:55:30 thread -- common/autotest_common.sh@10 -- # set +x 00:17:28.752 ************************************ 00:17:28.752 START TEST thread_poller_perf 00:17:28.752 ************************************ 00:17:28.752 16:55:30 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:17:28.752 [2024-07-22 16:55:30.140717] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:28.752 [2024-07-22 16:55:30.140925] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62183 ] 00:17:28.752 [2024-07-22 16:55:30.311348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.011 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:17:29.011 [2024-07-22 16:55:30.578439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.929 ====================================== 00:17:30.929 busy:2211238382 (cyc) 00:17:30.929 total_run_count: 287000 00:17:30.929 tsc_hz: 2200000000 (cyc) 00:17:30.929 ====================================== 00:17:30.929 poller_cost: 7704 (cyc), 3501 (nsec) 00:17:30.929 00:17:30.929 real 0m1.932s 00:17:30.929 user 0m1.698s 00:17:30.929 sys 0m0.123s 00:17:30.929 16:55:32 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:30.929 16:55:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:17:30.929 ************************************ 00:17:30.929 END TEST thread_poller_perf 00:17:30.930 ************************************ 00:17:30.930 16:55:32 thread -- common/autotest_common.sh@1142 -- # return 0 00:17:30.930 16:55:32 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:17:30.930 16:55:32 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:17:30.930 16:55:32 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:30.930 16:55:32 thread -- common/autotest_common.sh@10 -- # set +x 00:17:30.930 ************************************ 00:17:30.930 START TEST thread_poller_perf 00:17:30.930 ************************************ 00:17:30.930 16:55:32 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:17:30.930 [2024-07-22 16:55:32.126704] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:17:30.930 [2024-07-22 16:55:32.126918] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62225 ] 00:17:30.930 [2024-07-22 16:55:32.302921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.187 [2024-07-22 16:55:32.576382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.187 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:17:32.588 ====================================== 00:17:32.588 busy:2204260602 (cyc) 00:17:32.588 total_run_count: 3683000 00:17:32.588 tsc_hz: 2200000000 (cyc) 00:17:32.588 ====================================== 00:17:32.588 poller_cost: 598 (cyc), 271 (nsec) 00:17:32.588 00:17:32.588 real 0m1.932s 00:17:32.588 user 0m1.692s 00:17:32.588 sys 0m0.129s 00:17:32.588 16:55:34 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:32.588 16:55:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:17:32.588 ************************************ 00:17:32.588 END TEST thread_poller_perf 00:17:32.588 ************************************ 00:17:32.588 16:55:34 thread -- common/autotest_common.sh@1142 -- # return 0 00:17:32.588 16:55:34 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:17:32.588 00:17:32.588 real 0m4.048s 00:17:32.588 user 0m3.463s 00:17:32.588 sys 0m0.361s 00:17:32.588 16:55:34 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:32.588 16:55:34 thread -- common/autotest_common.sh@10 -- # set +x 00:17:32.588 ************************************ 00:17:32.588 END TEST thread 00:17:32.588 ************************************ 00:17:32.588 16:55:34 -- common/autotest_common.sh@1142 -- # return 0 00:17:32.589 16:55:34 -- spdk/autotest.sh@183 -- # run_test accel 
/home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:17:32.589 16:55:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:32.589 16:55:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:32.589 16:55:34 -- common/autotest_common.sh@10 -- # set +x 00:17:32.589 ************************************ 00:17:32.589 START TEST accel 00:17:32.589 ************************************ 00:17:32.589 16:55:34 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:17:32.589 * Looking for test storage... 00:17:32.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:17:32.589 16:55:34 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:17:32.589 16:55:34 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:17:32.589 16:55:34 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:17:32.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.589 16:55:34 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=62306 00:17:32.589 16:55:34 accel -- accel/accel.sh@63 -- # waitforlisten 62306 00:17:32.589 16:55:34 accel -- common/autotest_common.sh@829 -- # '[' -z 62306 ']' 00:17:32.589 16:55:34 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.589 16:55:34 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:32.589 16:55:34 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:32.589 16:55:34 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:32.589 16:55:34 accel -- common/autotest_common.sh@10 -- # set +x 00:17:32.589 16:55:34 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:17:32.589 16:55:34 accel -- accel/accel.sh@61 -- # build_accel_config 00:17:32.589 16:55:34 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:32.589 16:55:34 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:32.589 16:55:34 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:32.589 16:55:34 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:32.589 16:55:34 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:32.589 16:55:34 accel -- accel/accel.sh@40 -- # local IFS=, 00:17:32.589 16:55:34 accel -- accel/accel.sh@41 -- # jq -r . 00:17:32.847 [2024-07-22 16:55:34.353019] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:32.847 [2024-07-22 16:55:34.353267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62306 ] 00:17:33.106 [2024-07-22 16:55:34.530359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.365 [2024-07-22 16:55:34.832650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.300 16:55:35 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.300 16:55:35 accel -- common/autotest_common.sh@862 -- # return 0 00:17:34.300 16:55:35 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:17:34.300 16:55:35 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:17:34.300 16:55:35 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:17:34.300 16:55:35 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:17:34.300 16:55:35 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:17:34.300 16:55:35 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:17:34.300 16:55:35 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.300 16:55:35 accel -- common/autotest_common.sh@10 -- # set +x 00:17:34.300 16:55:35 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:17:34.300 16:55:35 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.300 16:55:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # IFS== 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:17:34.300 16:55:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:34.300 16:55:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # IFS== 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:17:34.300 16:55:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:34.300 16:55:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # IFS== 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:17:34.300 16:55:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:34.300 16:55:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # IFS== 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:17:34.300 16:55:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:34.300 16:55:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # IFS== 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:17:34.300 16:55:35 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:17:34.300 16:55:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # IFS== 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:17:34.300 16:55:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:34.300 16:55:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # IFS== 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:17:34.300 16:55:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:34.300 16:55:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # IFS== 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:17:34.300 16:55:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:34.300 16:55:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # IFS== 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:17:34.300 16:55:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:34.300 16:55:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # IFS== 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:17:34.300 16:55:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:34.300 16:55:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # IFS== 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:17:34.300 16:55:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:34.300 16:55:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # 
IFS== 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:17:34.300 16:55:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:34.300 16:55:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # IFS== 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:17:34.300 16:55:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:34.300 16:55:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # IFS== 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:17:34.300 16:55:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:34.300 16:55:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # IFS== 00:17:34.300 16:55:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:17:34.300 16:55:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:34.300 16:55:35 accel -- accel/accel.sh@75 -- # killprocess 62306 00:17:34.300 16:55:35 accel -- common/autotest_common.sh@948 -- # '[' -z 62306 ']' 00:17:34.300 16:55:35 accel -- common/autotest_common.sh@952 -- # kill -0 62306 00:17:34.300 16:55:35 accel -- common/autotest_common.sh@953 -- # uname 00:17:34.300 16:55:35 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:34.301 16:55:35 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62306 00:17:34.301 16:55:35 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:34.301 16:55:35 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:34.301 killing process with pid 62306 00:17:34.301 16:55:35 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62306' 00:17:34.301 16:55:35 accel -- common/autotest_common.sh@967 -- # kill 62306 00:17:34.301 16:55:35 
accel -- common/autotest_common.sh@972 -- # wait 62306 00:17:36.829 16:55:38 accel -- accel/accel.sh@76 -- # trap - ERR 00:17:36.829 16:55:38 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:17:36.829 16:55:38 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:36.829 16:55:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:36.829 16:55:38 accel -- common/autotest_common.sh@10 -- # set +x 00:17:36.829 16:55:38 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:17:36.829 16:55:38 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:17:36.829 16:55:38 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:17:36.829 16:55:38 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:36.829 16:55:38 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:36.829 16:55:38 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:36.830 16:55:38 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:36.830 16:55:38 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:36.830 16:55:38 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:17:36.830 16:55:38 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:17:37.087 16:55:38 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:37.087 16:55:38 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:17:37.087 16:55:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:37.087 16:55:38 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:17:37.087 16:55:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:17:37.087 16:55:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:37.087 16:55:38 accel -- common/autotest_common.sh@10 -- # set +x 00:17:37.087 ************************************ 00:17:37.087 START TEST accel_missing_filename 00:17:37.087 ************************************ 00:17:37.087 16:55:38 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:17:37.087 16:55:38 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:17:37.087 16:55:38 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:17:37.087 16:55:38 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:17:37.087 16:55:38 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:37.087 16:55:38 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:17:37.087 16:55:38 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:37.087 16:55:38 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:17:37.087 16:55:38 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:17:37.087 16:55:38 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:17:37.088 16:55:38 accel.accel_missing_filename -- 
accel/accel.sh@31 -- # accel_json_cfg=() 00:17:37.088 16:55:38 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:37.088 16:55:38 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:37.088 16:55:38 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:37.088 16:55:38 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:37.088 16:55:38 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:17:37.088 16:55:38 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:17:37.088 [2024-07-22 16:55:38.605209] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:37.088 [2024-07-22 16:55:38.605444] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62387 ] 00:17:37.345 [2024-07-22 16:55:38.780531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.604 [2024-07-22 16:55:39.047049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.862 [2024-07-22 16:55:39.289169] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:38.437 [2024-07-22 16:55:39.835185] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:17:38.696 A filename is required. 
00:17:38.696 16:55:40 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:17:38.696 16:55:40 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:38.696 16:55:40 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:17:38.696 16:55:40 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:17:38.696 16:55:40 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:17:38.696 16:55:40 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:38.696 00:17:38.696 real 0m1.754s 00:17:38.696 user 0m1.444s 00:17:38.696 sys 0m0.240s 00:17:38.696 ************************************ 00:17:38.696 END TEST accel_missing_filename 00:17:38.696 ************************************ 00:17:38.696 16:55:40 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:38.696 16:55:40 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:17:38.955 16:55:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:38.955 16:55:40 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:38.955 16:55:40 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:17:38.955 16:55:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:38.955 16:55:40 accel -- common/autotest_common.sh@10 -- # set +x 00:17:38.955 ************************************ 00:17:38.955 START TEST accel_compress_verify 00:17:38.955 ************************************ 00:17:38.955 16:55:40 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:38.955 16:55:40 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:17:38.955 16:55:40 accel.accel_compress_verify -- 
common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:38.955 16:55:40 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:17:38.955 16:55:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.955 16:55:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:17:38.955 16:55:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.955 16:55:40 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:38.955 16:55:40 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:38.955 16:55:40 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:17:38.955 16:55:40 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:38.955 16:55:40 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:38.955 16:55:40 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:38.955 16:55:40 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:38.955 16:55:40 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:38.955 16:55:40 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:17:38.955 16:55:40 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:17:38.955 [2024-07-22 16:55:40.413925] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:17:38.955 [2024-07-22 16:55:40.414165] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62429 ] 00:17:39.214 [2024-07-22 16:55:40.586223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.472 [2024-07-22 16:55:40.835371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.472 [2024-07-22 16:55:41.075711] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:40.040 [2024-07-22 16:55:41.632783] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:17:40.609 00:17:40.609 Compression does not support the verify option, aborting. 00:17:40.609 16:55:42 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:17:40.609 16:55:42 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:40.609 16:55:42 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:17:40.609 16:55:42 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:17:40.609 16:55:42 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:17:40.609 16:55:42 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:40.609 00:17:40.609 real 0m1.734s 00:17:40.609 user 0m1.429s 00:17:40.609 sys 0m0.246s 00:17:40.609 16:55:42 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:40.609 16:55:42 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:17:40.609 ************************************ 00:17:40.609 END TEST accel_compress_verify 00:17:40.609 ************************************ 00:17:40.609 16:55:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:40.609 16:55:42 accel -- accel/accel.sh@95 -- # run_test 
accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:17:40.609 16:55:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:17:40.609 16:55:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.609 16:55:42 accel -- common/autotest_common.sh@10 -- # set +x 00:17:40.609 ************************************ 00:17:40.609 START TEST accel_wrong_workload 00:17:40.609 ************************************ 00:17:40.609 16:55:42 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:17:40.609 16:55:42 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:17:40.609 16:55:42 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:17:40.609 16:55:42 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:17:40.609 16:55:42 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:40.609 16:55:42 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:17:40.609 16:55:42 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:40.609 16:55:42 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:17:40.609 16:55:42 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:17:40.609 16:55:42 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:17:40.609 16:55:42 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:40.609 16:55:42 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:40.609 16:55:42 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:40.609 16:55:42 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:40.609 16:55:42 accel.accel_wrong_workload -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:17:40.609 16:55:42 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:17:40.609 16:55:42 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:17:40.609 Unsupported workload type: foobar 00:17:40.609 [2024-07-22 16:55:42.190474] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:17:40.609 accel_perf options: 00:17:40.609 [-h help message] 00:17:40.609 [-q queue depth per core] 00:17:40.609 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:17:40.609 [-T number of threads per core 00:17:40.609 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:17:40.609 [-t time in seconds] 00:17:40.609 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:17:40.609 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:17:40.609 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:17:40.609 [-l for compress/decompress workloads, name of uncompressed input file 00:17:40.609 [-S for crc32c workload, use this seed value (default 0) 00:17:40.609 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:17:40.609 [-f for fill workload, use this BYTE value (default 255) 00:17:40.609 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:17:40.609 [-y verify result if this switch is on] 00:17:40.609 [-a tasks to allocate per core (default: same value as -q)] 00:17:40.609 Can be used to spread operations across a wider range of memory. 
00:17:40.609 16:55:42 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:17:40.609 16:55:42 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:40.609 16:55:42 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:40.609 16:55:42 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:40.609 00:17:40.609 real 0m0.076s 00:17:40.609 user 0m0.077s 00:17:40.609 sys 0m0.040s 00:17:40.609 16:55:42 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:40.609 16:55:42 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:17:40.609 ************************************ 00:17:40.609 END TEST accel_wrong_workload 00:17:40.609 ************************************ 00:17:40.870 16:55:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:40.870 16:55:42 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:17:40.870 16:55:42 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:17:40.870 16:55:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.870 16:55:42 accel -- common/autotest_common.sh@10 -- # set +x 00:17:40.870 ************************************ 00:17:40.870 START TEST accel_negative_buffers 00:17:40.870 ************************************ 00:17:40.870 16:55:42 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:17:40.870 16:55:42 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:17:40.870 16:55:42 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:17:40.870 16:55:42 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:17:40.870 16:55:42 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:17:40.870 16:55:42 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:17:40.870 16:55:42 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:40.871 16:55:42 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:17:40.871 16:55:42 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:17:40.871 16:55:42 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:17:40.871 16:55:42 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:40.871 16:55:42 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:40.871 16:55:42 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:40.871 16:55:42 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:40.871 16:55:42 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:40.871 16:55:42 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:17:40.871 16:55:42 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:17:40.871 -x option must be non-negative. 00:17:40.871 [2024-07-22 16:55:42.315070] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:17:40.871 accel_perf options: 00:17:40.871 [-h help message] 00:17:40.871 [-q queue depth per core] 00:17:40.871 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:17:40.871 [-T number of threads per core 00:17:40.871 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:17:40.871 [-t time in seconds] 00:17:40.871 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:17:40.871 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:17:40.871 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:17:40.871 [-l for compress/decompress workloads, name of uncompressed input file 00:17:40.871 [-S for crc32c workload, use this seed value (default 0) 00:17:40.871 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:17:40.871 [-f for fill workload, use this BYTE value (default 255) 00:17:40.871 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:17:40.871 [-y verify result if this switch is on] 00:17:40.871 [-a tasks to allocate per core (default: same value as -q)] 00:17:40.871 Can be used to spread operations across a wider range of memory. 
00:17:40.871 16:55:42 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:17:40.871 16:55:42 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:40.871 16:55:42 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:40.871 16:55:42 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:40.871 00:17:40.871 real 0m0.076s 00:17:40.871 user 0m0.089s 00:17:40.871 sys 0m0.033s 00:17:40.871 16:55:42 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:40.871 16:55:42 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:17:40.871 ************************************ 00:17:40.871 END TEST accel_negative_buffers 00:17:40.871 ************************************ 00:17:40.871 16:55:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:40.871 16:55:42 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:17:40.871 16:55:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:17:40.871 16:55:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.871 16:55:42 accel -- common/autotest_common.sh@10 -- # set +x 00:17:40.871 ************************************ 00:17:40.871 START TEST accel_crc32c 00:17:40.871 ************************************ 00:17:40.871 16:55:42 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:17:40.871 16:55:42 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:17:40.871 16:55:42 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:17:40.871 16:55:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:40.871 16:55:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:40.871 16:55:42 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:17:40.871 16:55:42 accel.accel_crc32c -- 
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:17:40.871 16:55:42 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:17:40.871 16:55:42 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:40.871 16:55:42 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:40.871 16:55:42 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:40.871 16:55:42 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:40.871 16:55:42 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:40.871 16:55:42 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:17:40.871 16:55:42 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:17:40.871 [2024-07-22 16:55:42.444827] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:40.871 [2024-07-22 16:55:42.445014] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62507 ] 00:17:41.131 [2024-07-22 16:55:42.625195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.390 [2024-07-22 16:55:42.924121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:41.649 16:55:43 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:41.649 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r 
var val 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:41.650 
16:55:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:41.650 16:55:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:44.180 16:55:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:17:44.180 16:55:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:44.180 16:55:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:44.180 16:55:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:44.180 16:55:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:17:44.180 16:55:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:44.180 16:55:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:44.180 16:55:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:44.180 16:55:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:17:44.180 16:55:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:44.180 16:55:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:44.180 16:55:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:44.180 16:55:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:17:44.180 16:55:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:17:44.181 16:55:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:44.181 16:55:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:44.181 16:55:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:17:44.181 16:55:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:44.181 16:55:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:44.181 16:55:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:44.181 16:55:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:17:44.181 16:55:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:44.181 16:55:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:44.181 16:55:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:44.181 16:55:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:44.181 16:55:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:17:44.181 16:55:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:44.181 00:17:44.181 real 0m2.789s 00:17:44.181 user 0m2.437s 00:17:44.181 sys 0m0.254s 00:17:44.181 16:55:45 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:44.181 16:55:45 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:17:44.181 ************************************ 00:17:44.181 END TEST accel_crc32c 00:17:44.181 ************************************ 00:17:44.181 16:55:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:44.181 16:55:45 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:17:44.181 16:55:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:17:44.181 16:55:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:44.181 16:55:45 accel -- common/autotest_common.sh@10 -- # set +x 00:17:44.181 ************************************ 00:17:44.181 START TEST accel_crc32c_C2 00:17:44.181 
************************************ 00:17:44.181 16:55:45 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:17:44.181 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:17:44.181 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:17:44.181 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:44.181 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:17:44.181 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:44.181 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:17:44.181 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:17:44.181 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:44.181 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:44.181 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:44.181 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:44.181 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:44.181 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:17:44.181 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:17:44.181 [2024-07-22 16:55:45.305065] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:17:44.181 [2024-07-22 16:55:45.305313] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62550 ] 00:17:44.181 [2024-07-22 16:55:45.481047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.181 [2024-07-22 16:55:45.754781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:44.440 16:55:45 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:44.440 16:55:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:44.440 16:55:46 accel.accel_crc32c_C2 
-- accel/accel.sh@20 -- # val=32 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:44.440 
16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:44.440 16:55:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case 
"$var" in 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:46.343 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:46.602 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:46.602 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:17:46.602 16:55:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:46.602 00:17:46.602 real 0m2.726s 00:17:46.602 user 0m2.391s 00:17:46.602 sys 0m0.235s 00:17:46.602 16:55:47 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:46.602 16:55:47 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:17:46.602 ************************************ 00:17:46.602 END TEST accel_crc32c_C2 00:17:46.602 ************************************ 00:17:46.602 16:55:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:46.602 16:55:48 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:17:46.602 16:55:48 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:17:46.602 16:55:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:46.602 16:55:48 accel -- common/autotest_common.sh@10 -- # set +x 00:17:46.602 ************************************ 00:17:46.602 START TEST accel_copy 00:17:46.602 ************************************ 00:17:46.602 16:55:48 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:17:46.602 16:55:48 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:17:46.602 16:55:48 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:17:46.602 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:46.602 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:46.602 16:55:48 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:17:46.602 16:55:48 accel.accel_copy -- accel/accel.sh@12 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:17:46.602 16:55:48 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:17:46.602 16:55:48 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:46.602 16:55:48 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:46.602 16:55:48 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:46.602 16:55:48 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:46.602 16:55:48 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:46.602 16:55:48 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:17:46.602 16:55:48 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:17:46.602 [2024-07-22 16:55:48.066358] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:46.602 [2024-07-22 16:55:48.066530] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62600 ] 00:17:46.861 [2024-07-22 16:55:48.240586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.120 [2024-07-22 16:55:48.520733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:47.379 16:55:48 
accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:17:47.379 16:55:48 
accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:47.379 16:55:48 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:47.379 16:55:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:49.283 16:55:50 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:17:49.283 16:55:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:49.283 00:17:49.283 real 0m2.720s 00:17:49.283 user 0m2.400s 00:17:49.283 sys 0m0.220s 00:17:49.283 16:55:50 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:49.283 16:55:50 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:17:49.283 ************************************ 00:17:49.283 END TEST accel_copy 00:17:49.283 ************************************ 00:17:49.283 16:55:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:49.283 16:55:50 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:17:49.283 16:55:50 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:17:49.283 16:55:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:49.283 16:55:50 accel -- common/autotest_common.sh@10 -- # set +x 00:17:49.283 ************************************ 00:17:49.283 START TEST accel_fill 00:17:49.283 ************************************ 00:17:49.283 16:55:50 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:17:49.283 16:55:50 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:17:49.283 16:55:50 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:17:49.283 16:55:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:49.283 16:55:50 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:17:49.283 16:55:50 
accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:49.283 16:55:50 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:17:49.283 16:55:50 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:17:49.283 16:55:50 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:49.283 16:55:50 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:49.283 16:55:50 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:49.283 16:55:50 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:49.283 16:55:50 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:49.283 16:55:50 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:17:49.283 16:55:50 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:17:49.284 [2024-07-22 16:55:50.834971] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:49.284 [2024-07-22 16:55:50.835141] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62648 ] 00:17:49.542 [2024-07-22 16:55:51.011189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.802 [2024-07-22 16:55:51.267270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:50.061 16:55:51 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:50.061 16:55:51 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:50.061 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:50.062 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:50.062 16:55:51 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:17:50.062 16:55:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:50.062 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:50.062 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:50.062 16:55:51 
accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:17:50.062 16:55:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:50.062 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:50.062 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:50.062 16:55:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:50.062 16:55:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:50.062 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:50.062 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:50.062 16:55:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:50.062 16:55:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:50.062 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:50.062 16:55:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:51.966 16:55:53 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:17:51.966 16:55:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:51.966 00:17:51.966 real 0m2.704s 00:17:51.966 user 0m2.367s 00:17:51.966 sys 0m0.238s 00:17:51.966 16:55:53 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:51.966 16:55:53 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:17:51.966 ************************************ 00:17:51.966 END TEST accel_fill 00:17:51.966 ************************************ 00:17:51.966 16:55:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:51.966 16:55:53 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:17:51.966 16:55:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:17:51.966 16:55:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:51.966 16:55:53 accel -- common/autotest_common.sh@10 -- # set +x 00:17:51.966 ************************************ 00:17:51.966 START TEST accel_copy_crc32c 00:17:51.966 ************************************ 00:17:51.966 16:55:53 accel.accel_copy_crc32c -- 
common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:17:51.966 16:55:53 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:17:51.966 16:55:53 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:17:51.966 16:55:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:51.966 16:55:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:51.966 16:55:53 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:17:51.966 16:55:53 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:17:51.966 16:55:53 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:17:51.966 16:55:53 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:51.966 16:55:53 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:51.966 16:55:53 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:51.966 16:55:53 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:51.966 16:55:53 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:51.966 16:55:53 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:17:51.966 16:55:53 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:17:52.224 [2024-07-22 16:55:53.585899] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:17:52.224 [2024-07-22 16:55:53.586084] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62699 ] 00:17:52.224 [2024-07-22 16:55:53.753360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.484 [2024-07-22 16:55:54.013924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # 
case "$var" in 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:52.743 16:55:54 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:52.743 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:52.744 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:17:52.744 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:52.744 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:52.744 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:52.744 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:17:52.744 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:52.744 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:52.744 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:52.744 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:17:52.744 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:52.744 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 
00:17:52.744 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:52.744 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:52.744 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:52.744 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:52.744 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:52.744 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:52.744 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:52.744 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:52.744 16:55:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:54.647 
16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:54.647 ************************************ 00:17:54.647 END TEST accel_copy_crc32c 00:17:54.647 ************************************ 00:17:54.647 00:17:54.647 real 0m2.701s 00:17:54.647 user 0m2.374s 00:17:54.647 sys 0m0.230s 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:54.647 16:55:56 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:17:54.905 16:55:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:54.905 16:55:56 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:17:54.905 16:55:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:17:54.905 16:55:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:54.905 16:55:56 accel -- common/autotest_common.sh@10 -- # set +x 00:17:54.905 ************************************ 00:17:54.905 START TEST accel_copy_crc32c_C2 00:17:54.905 
************************************ 00:17:54.905 16:55:56 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:17:54.905 16:55:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:17:54.905 16:55:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:17:54.905 16:55:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:54.905 16:55:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:54.905 16:55:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:17:54.905 16:55:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:17:54.905 16:55:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:17:54.905 16:55:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:54.905 16:55:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:54.905 16:55:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:54.905 16:55:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:54.905 16:55:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:54.905 16:55:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:17:54.905 16:55:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:17:54.905 [2024-07-22 16:55:56.341303] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:17:54.905 [2024-07-22 16:55:56.341486] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62745 ] 00:17:54.905 [2024-07-22 16:55:56.514612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.163 [2024-07-22 16:55:56.766242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:55.422 16:55:57 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # 
val=Yes 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:55.422 16:55:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:57.324 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:57.324 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:57.324 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:57.324 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:57.324 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:57.324 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:57.324 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:57.324 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:57.324 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:57.324 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:57.324 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:57.324 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:57.582 
16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:57.582 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:57.582 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:57.582 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:57.582 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:57.582 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:57.582 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:57.582 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:57.583 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:17:57.583 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:17:57.583 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:17:57.583 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:17:57.583 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:57.583 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:17:57.583 16:55:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:57.583 00:17:57.583 real 0m2.664s 00:17:57.583 user 0m0.018s 00:17:57.583 sys 0m0.003s 00:17:57.583 16:55:58 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:57.583 16:55:58 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:17:57.583 ************************************ 00:17:57.583 END TEST accel_copy_crc32c_C2 00:17:57.583 ************************************ 00:17:57.583 16:55:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:17:57.583 16:55:58 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:17:57.583 16:55:58 accel -- common/autotest_common.sh@1099 -- # 
'[' 7 -le 1 ']' 00:17:57.583 16:55:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:57.583 16:55:58 accel -- common/autotest_common.sh@10 -- # set +x 00:17:57.583 ************************************ 00:17:57.583 START TEST accel_dualcast 00:17:57.583 ************************************ 00:17:57.583 16:55:59 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:17:57.583 16:55:59 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:17:57.583 16:55:59 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:17:57.583 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:57.583 16:55:59 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:17:57.583 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:57.583 16:55:59 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:17:57.583 16:55:59 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:17:57.583 16:55:59 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:57.583 16:55:59 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:57.583 16:55:59 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:57.583 16:55:59 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:57.583 16:55:59 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:57.583 16:55:59 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:17:57.583 16:55:59 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:17:57.583 [2024-07-22 16:55:59.051733] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:17:57.583 [2024-07-22 16:55:59.051884] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62792 ] 00:17:57.841 [2024-07-22 16:55:59.214363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.123 [2024-07-22 16:55:59.468585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:58.123 16:55:59 accel.accel_dualcast 
-- accel/accel.sh@19 -- # IFS=: 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 
00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:17:58.123 16:55:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case 
"$var" in 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:18:00.025 16:56:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:18:00.026 16:56:01 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:00.026 16:56:01 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:18:00.026 16:56:01 accel.accel_dualcast -- accel/accel.sh@27 
-- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:00.026 00:18:00.026 real 0m2.626s 00:18:00.026 user 0m2.306s 00:18:00.026 sys 0m0.224s 00:18:00.026 16:56:01 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:00.026 16:56:01 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:18:00.026 ************************************ 00:18:00.026 END TEST accel_dualcast 00:18:00.026 ************************************ 00:18:00.284 16:56:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:18:00.284 16:56:01 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:18:00.284 16:56:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:18:00.284 16:56:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:00.284 16:56:01 accel -- common/autotest_common.sh@10 -- # set +x 00:18:00.284 ************************************ 00:18:00.284 START TEST accel_compare 00:18:00.284 ************************************ 00:18:00.284 16:56:01 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:18:00.284 16:56:01 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:18:00.284 16:56:01 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:18:00.284 16:56:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:00.284 16:56:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:00.284 16:56:01 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:18:00.284 16:56:01 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:18:00.284 16:56:01 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:18:00.284 16:56:01 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:00.284 16:56:01 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:00.284 16:56:01 
accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:00.284 16:56:01 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:00.284 16:56:01 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:00.284 16:56:01 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:18:00.284 16:56:01 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:18:00.284 [2024-07-22 16:56:01.737315] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:00.284 [2024-07-22 16:56:01.737529] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62844 ] 00:18:00.543 [2024-07-22 16:56:01.911847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.801 [2024-07-22 16:56:02.167968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:00.801 16:56:02 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:00.801 
16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:18:00.801 16:56:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:00.802 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:00.802 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:00.802 16:56:02 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:18:00.802 16:56:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:00.802 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:00.802 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:00.802 16:56:02 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:18:00.802 16:56:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:00.802 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:00.802 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:00.802 16:56:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:18:00.802 16:56:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:00.802 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:00.802 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:00.802 16:56:02 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:18:00.802 16:56:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:00.802 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:00.802 16:56:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@21 
-- # case "$var" in 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:18:03.362 16:56:04 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:03.362 00:18:03.362 real 0m2.679s 00:18:03.362 user 0m2.346s 00:18:03.362 sys 0m0.236s 00:18:03.362 16:56:04 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:03.362 ************************************ 00:18:03.362 END TEST accel_compare 00:18:03.362 ************************************ 00:18:03.362 16:56:04 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:18:03.362 16:56:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:18:03.362 16:56:04 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:18:03.362 16:56:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:18:03.362 16:56:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:03.362 16:56:04 accel -- common/autotest_common.sh@10 -- # set +x 00:18:03.362 ************************************ 00:18:03.362 START TEST accel_xor 00:18:03.362 ************************************ 00:18:03.362 16:56:04 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:18:03.362 16:56:04 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:18:03.362 16:56:04 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:18:03.362 16:56:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:03.362 16:56:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:03.362 16:56:04 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:18:03.362 16:56:04 accel.accel_xor -- accel/accel.sh@12 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:18:03.362 16:56:04 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:18:03.362 16:56:04 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:03.362 16:56:04 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:03.362 16:56:04 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:03.362 16:56:04 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:03.362 16:56:04 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:03.362 16:56:04 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:18:03.362 16:56:04 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:18:03.362 [2024-07-22 16:56:04.457823] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:03.362 [2024-07-22 16:56:04.457976] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62885 ] 00:18:03.362 [2024-07-22 16:56:04.616542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.362 [2024-07-22 16:56:04.875643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:03.621 16:56:05 accel.accel_xor -- 
accel/accel.sh@20 -- # val=0x1 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 
00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:03.621 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:03.622 16:56:05 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:03.622 16:56:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:05.523 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:05.523 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 
00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:05.524 00:18:05.524 real 0m2.639s 00:18:05.524 user 0m2.332s 00:18:05.524 sys 0m0.207s 00:18:05.524 ************************************ 00:18:05.524 END TEST accel_xor 00:18:05.524 ************************************ 00:18:05.524 16:56:07 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:05.524 16:56:07 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:18:05.524 16:56:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:18:05.524 16:56:07 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:18:05.524 16:56:07 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:18:05.524 16:56:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:05.524 16:56:07 accel -- common/autotest_common.sh@10 -- # set +x 00:18:05.524 ************************************ 00:18:05.524 START TEST accel_xor 00:18:05.524 ************************************ 00:18:05.524 16:56:07 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:18:05.524 16:56:07 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:18:05.783 [2024-07-22 16:56:07.152856] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:05.783 [2024-07-22 16:56:07.153250] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62937 ] 00:18:05.783 [2024-07-22 16:56:07.326889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.041 [2024-07-22 16:56:07.575786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.300 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:06.300 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:06.300 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:06.300 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:06.300 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:06.300 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:06.300 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:06.300 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:06.300 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- 
# read -r var val 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:06.301 16:56:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:08.203 16:56:09 accel.accel_xor -- 
accel/accel.sh@19 -- # IFS=: 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:18:08.203 16:56:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:08.203 ************************************ 00:18:08.203 END TEST accel_xor 00:18:08.203 ************************************ 
00:18:08.203 00:18:08.203 real 0m2.646s 00:18:08.203 user 0m2.312s 00:18:08.203 sys 0m0.238s 00:18:08.203 16:56:09 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:08.203 16:56:09 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:18:08.203 16:56:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:18:08.203 16:56:09 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:18:08.203 16:56:09 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:18:08.203 16:56:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:08.203 16:56:09 accel -- common/autotest_common.sh@10 -- # set +x 00:18:08.203 ************************************ 00:18:08.203 START TEST accel_dif_verify 00:18:08.203 ************************************ 00:18:08.203 16:56:09 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:18:08.203 16:56:09 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:18:08.203 16:56:09 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:18:08.203 16:56:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:08.203 16:56:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:08.203 16:56:09 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:18:08.203 16:56:09 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:18:08.203 16:56:09 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:18:08.203 16:56:09 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:08.203 16:56:09 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:08.203 16:56:09 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:08.203 16:56:09 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:08.203 
16:56:09 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:08.203 16:56:09 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:18:08.203 16:56:09 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:18:08.462 [2024-07-22 16:56:09.844328] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:08.462 [2024-07-22 16:56:09.844534] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62984 ] 00:18:08.462 [2024-07-22 16:56:10.022259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.720 [2024-07-22 16:56:10.274466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@21 
-- # case "$var" in 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:08.978 16:56:10 accel.accel_dif_verify -- 
accel/accel.sh@20 -- # val='8 bytes' 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:08.978 16:56:10 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:08.978 16:56:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r 
var val 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:18:11.509 16:56:12 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:11.509 00:18:11.509 real 0m2.719s 00:18:11.509 user 0m2.395s 00:18:11.509 sys 0m0.226s 00:18:11.509 16:56:12 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:11.509 16:56:12 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:18:11.509 ************************************ 00:18:11.509 END TEST accel_dif_verify 00:18:11.509 
************************************ 00:18:11.509 16:56:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:18:11.509 16:56:12 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:18:11.509 16:56:12 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:18:11.509 16:56:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:11.509 16:56:12 accel -- common/autotest_common.sh@10 -- # set +x 00:18:11.509 ************************************ 00:18:11.509 START TEST accel_dif_generate 00:18:11.509 ************************************ 00:18:11.509 16:56:12 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:18:11.509 16:56:12 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:18:11.509 16:56:12 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:18:11.509 16:56:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:11.509 16:56:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:11.509 16:56:12 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:18:11.510 16:56:12 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:18:11.510 16:56:12 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:18:11.510 16:56:12 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:11.510 16:56:12 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:11.510 16:56:12 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:11.510 16:56:12 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:11.510 16:56:12 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:11.510 16:56:12 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:18:11.510 16:56:12 
accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:18:11.510 [2024-07-22 16:56:12.613746] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:11.510 [2024-07-22 16:56:12.613907] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63030 ] 00:18:11.510 [2024-07-22 16:56:12.786250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.510 [2024-07-22 16:56:13.047498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:11.768 16:56:13 accel.accel_dif_generate 
-- accel/accel.sh@19 -- # read -r var val 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:18:11.768 16:56:13 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:11.768 
16:56:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:11.768 16:56:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var 
val 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:18:13.671 16:56:15 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:13.671 00:18:13.671 real 0m2.701s 00:18:13.671 user 0m2.376s 00:18:13.671 sys 0m0.226s 00:18:13.671 16:56:15 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:13.671 ************************************ 00:18:13.671 END TEST accel_dif_generate 00:18:13.671 ************************************ 00:18:13.671 16:56:15 
accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:18:13.929 16:56:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:18:13.929 16:56:15 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:18:13.929 16:56:15 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:18:13.929 16:56:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:13.929 16:56:15 accel -- common/autotest_common.sh@10 -- # set +x 00:18:13.929 ************************************ 00:18:13.929 START TEST accel_dif_generate_copy 00:18:13.929 ************************************ 00:18:13.929 16:56:15 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:18:13.929 16:56:15 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:18:13.929 16:56:15 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:18:13.929 16:56:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:13.929 16:56:15 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:18:13.929 16:56:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:13.929 16:56:15 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:18:13.929 16:56:15 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:18:13.929 16:56:15 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:13.929 16:56:15 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:13.929 16:56:15 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:13.929 16:56:15 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:13.929 16:56:15 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:18:13.929 16:56:15 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:18:13.929 16:56:15 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:18:13.929 [2024-07-22 16:56:15.363155] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:13.929 [2024-07-22 16:56:15.363388] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63081 ] 00:18:14.187 [2024-07-22 16:56:15.546783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.445 [2024-07-22 16:56:15.823857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.703 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:18:14.703 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:18:14.703 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:14.703 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:14.703 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:18:14.703 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:18:14.703 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:14.703 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:14.703 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:18:14.703 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:18:14.703 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:14.703 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:14.703 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:18:14.703 16:56:16 
accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:18:14.703 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:14.703 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:14.703 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:18:14.703 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:18:14.703 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:14.703 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:14.703 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 
00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:14.704 16:56:16 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:14.704 16:56:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 
-- # case "$var" in 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:16.606 00:18:16.606 real 0m2.739s 00:18:16.606 user 0m2.404s 00:18:16.606 sys 0m0.241s 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:16.606 ************************************ 00:18:16.606 END TEST accel_dif_generate_copy 00:18:16.606 ************************************ 00:18:16.606 16:56:18 accel.accel_dif_generate_copy -- 
common/autotest_common.sh@10 -- # set +x 00:18:16.606 16:56:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:18:16.606 16:56:18 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:18:16.606 16:56:18 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:16.606 16:56:18 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:18:16.606 16:56:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:16.606 16:56:18 accel -- common/autotest_common.sh@10 -- # set +x 00:18:16.606 ************************************ 00:18:16.606 START TEST accel_comp 00:18:16.606 ************************************ 00:18:16.606 16:56:18 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:16.606 16:56:18 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:18:16.606 16:56:18 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:18:16.606 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:16.606 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:16.606 16:56:18 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:16.606 16:56:18 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:16.606 16:56:18 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:18:16.606 16:56:18 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:16.606 16:56:18 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:16.606 16:56:18 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:16.606 16:56:18 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:16.606 16:56:18 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:18:16.606 16:56:18 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:18:16.606 16:56:18 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:18:16.606 [2024-07-22 16:56:18.145182] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:16.606 [2024-07-22 16:56:18.145338] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63129 ] 00:18:16.865 [2024-07-22 16:56:18.311408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.123 [2024-07-22 16:56:18.581868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var 
val 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:17.382 16:56:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # 
read -r var val 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:18:17.383 16:56:18 accel.accel_comp 
-- accel/accel.sh@21 -- # case "$var" in 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:17.383 16:56:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:19.285 16:56:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:18:19.286 16:56:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:19.286 00:18:19.286 real 0m2.647s 00:18:19.286 user 0m2.318s 00:18:19.286 sys 0m0.233s 00:18:19.286 16:56:20 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:19.286 16:56:20 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:18:19.286 ************************************ 00:18:19.286 END TEST accel_comp 00:18:19.286 ************************************ 00:18:19.286 16:56:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:18:19.286 16:56:20 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:19.286 16:56:20 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:18:19.286 16:56:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:19.286 16:56:20 accel -- common/autotest_common.sh@10 -- # set +x 00:18:19.286 ************************************ 00:18:19.286 START TEST accel_decomp 00:18:19.286 ************************************ 00:18:19.286 16:56:20 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:19.286 16:56:20 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:18:19.286 16:56:20 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:18:19.286 16:56:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:19.286 
16:56:20 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:19.286 16:56:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:19.286 16:56:20 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:19.286 16:56:20 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:18:19.286 16:56:20 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:19.286 16:56:20 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:19.286 16:56:20 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:19.286 16:56:20 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:19.286 16:56:20 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:19.286 16:56:20 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:18:19.286 16:56:20 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:18:19.286 [2024-07-22 16:56:20.850449] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:19.286 [2024-07-22 16:56:20.850649] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63175 ] 00:18:19.544 [2024-07-22 16:56:21.024851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.802 [2024-07-22 16:56:21.281052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.060 16:56:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:18:20.060 16:56:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:20.060 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:20.060 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:20.060 16:56:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:18:20.060 16:56:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:20.060 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:20.060 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:20.060 16:56:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:18:20.060 16:56:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:20.060 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:20.060 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:20.060 16:56:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:18:20.060 16:56:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:20.060 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:20.060 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:20.060 16:56:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 
00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:20.061 16:56:21 accel.accel_decomp -- 
accel/accel.sh@21 -- # case "$var" in 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 
00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:20.061 16:56:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:22.005 16:56:23 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:18:22.005 16:56:23 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:22.005 00:18:22.005 real 0m2.638s 00:18:22.005 user 0m2.305s 00:18:22.005 sys 0m0.239s 00:18:22.005 16:56:23 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:22.005 ************************************ 00:18:22.005 END TEST accel_decomp 00:18:22.005 ************************************ 00:18:22.005 16:56:23 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:18:22.005 16:56:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:18:22.005 16:56:23 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:18:22.005 16:56:23 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:18:22.005 16:56:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:22.005 16:56:23 accel -- common/autotest_common.sh@10 -- # set +x 00:18:22.005 ************************************ 00:18:22.005 START TEST accel_decomp_full 00:18:22.005 ************************************ 00:18:22.005 16:56:23 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:18:22.005 16:56:23 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:18:22.005 16:56:23 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:18:22.005 16:56:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
00:18:22.005 16:56:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:22.005 16:56:23 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:18:22.005 16:56:23 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:18:22.005 16:56:23 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:18:22.005 16:56:23 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:22.005 16:56:23 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:22.005 16:56:23 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:22.005 16:56:23 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:22.005 16:56:23 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:22.005 16:56:23 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:18:22.005 16:56:23 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:18:22.005 [2024-07-22 16:56:23.540881] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:22.005 [2024-07-22 16:56:23.541088] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63222 ] 00:18:22.264 [2024-07-22 16:56:23.718586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.522 [2024-07-22 16:56:23.965914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:18:22.781 16:56:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:22.782 16:56:24 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:22.782 16:56:24 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:22.782 16:56:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@19 
-- # IFS=: 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:18:24.687 16:56:26 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:24.687 00:18:24.687 real 0m2.628s 00:18:24.687 user 0m2.305s 00:18:24.687 sys 0m0.228s 00:18:24.687 16:56:26 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:24.687 16:56:26 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:18:24.687 ************************************ 00:18:24.687 END TEST accel_decomp_full 00:18:24.687 ************************************ 00:18:24.687 16:56:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:18:24.687 16:56:26 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:18:24.687 16:56:26 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:18:24.687 16:56:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:24.687 16:56:26 accel -- common/autotest_common.sh@10 -- # set +x 00:18:24.687 
************************************ 00:18:24.687 START TEST accel_decomp_mcore 00:18:24.687 ************************************ 00:18:24.687 16:56:26 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:18:24.687 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:18:24.687 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:18:24.687 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:24.687 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:24.687 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:18:24.687 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:18:24.687 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:18:24.687 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:24.687 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:24.687 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:24.687 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:24.687 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:24.687 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:18:24.687 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:18:24.687 [2024-07-22 16:56:26.214950] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:24.687 [2024-07-22 16:56:26.215164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63274 ] 00:18:24.945 [2024-07-22 16:56:26.395303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:25.203 [2024-07-22 16:56:26.664785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.203 [2024-07-22 16:56:26.664928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.203 [2024-07-22 16:56:26.665071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:25.203 [2024-07-22 16:56:26.665201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:25.462 16:56:26 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:25.462 16:56:26 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:25.462 16:56:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:27.378 
16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:27.378 00:18:27.378 real 0m2.683s 00:18:27.378 user 0m7.595s 00:18:27.378 sys 0m0.273s 00:18:27.378 ************************************ 00:18:27.378 END TEST accel_decomp_mcore 00:18:27.378 ************************************ 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:27.378 16:56:28 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:18:27.378 16:56:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:18:27.378 16:56:28 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:18:27.378 16:56:28 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:18:27.379 16:56:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:27.379 16:56:28 accel -- common/autotest_common.sh@10 -- # set +x 00:18:27.379 ************************************ 00:18:27.379 START TEST accel_decomp_full_mcore 00:18:27.379 ************************************ 00:18:27.379 16:56:28 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:18:27.379 16:56:28 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:18:27.379 16:56:28 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:18:27.379 16:56:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:27.379 
16:56:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:27.379 16:56:28 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:18:27.379 16:56:28 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:18:27.379 16:56:28 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:18:27.379 16:56:28 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:27.379 16:56:28 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:27.379 16:56:28 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:27.379 16:56:28 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:27.379 16:56:28 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:27.379 16:56:28 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:18:27.379 16:56:28 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:18:27.379 [2024-07-22 16:56:28.936024] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:27.379 [2024-07-22 16:56:28.936176] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63318 ] 00:18:27.676 [2024-07-22 16:56:29.106263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:27.934 [2024-07-22 16:56:29.368832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.934 [2024-07-22 16:56:29.368968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.934 [2024-07-22 16:56:29.369047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:27.934 [2024-07-22 16:56:29.369253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.192 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:18:28.192 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:28.192 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:28.192 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:28.192 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:18:28.192 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:28.192 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:28.192 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:28.192 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:18:28.192 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 
00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:28.193 
16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 
00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:28.193 16:56:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:18:30.093 16:56:31 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 
-- # read -r var val
00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:18:30.093
00:18:30.093 real 0m2.761s
00:18:30.093 user 0m0.018s
00:18:30.093 sys 0m0.004s
00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:18:30.093 ************************************
00:18:30.093 END TEST accel_decomp_full_mcore
00:18:30.093 ************************************
00:18:30.093 16:56:31 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:18:30.093 16:56:31 accel -- common/autotest_common.sh@1142 -- # return 0
00:18:30.093 16:56:31 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:18:30.093 16:56:31 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:18:30.093 16:56:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:18:30.093 16:56:31 accel -- common/autotest_common.sh@10 -- # set +x
00:18:30.093
************************************ 00:18:30.093 START TEST accel_decomp_mthread 00:18:30.093 ************************************ 00:18:30.093 16:56:31 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:18:30.093 16:56:31 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:18:30.093 16:56:31 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:18:30.093 16:56:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:30.093 16:56:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:30.093 16:56:31 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:18:30.093 16:56:31 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:18:30.093 16:56:31 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:18:30.093 16:56:31 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:30.093 16:56:31 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:30.093 16:56:31 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:30.093 16:56:31 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:30.093 16:56:31 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:30.093 16:56:31 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:18:30.093 16:56:31 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:18:30.352 [2024-07-22 16:56:31.757074] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:30.352 [2024-07-22 16:56:31.757242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63373 ] 00:18:30.352 [2024-07-22 16:56:31.930693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.610 [2024-07-22 16:56:32.189622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:18:30.869 16:56:32 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # 
accel_module=software 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:30.869 16:56:32 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:30.869 16:56:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:33.402 16:56:34 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val
00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:18:33.402 16:56:34 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:18:33.402
00:18:33.402 real 0m2.697s
00:18:33.402 user 0m0.013s
00:18:33.402 sys 0m0.006s
00:18:33.402 16:56:34 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:18:33.402 ************************************
00:18:33.402 END TEST accel_decomp_mthread
00:18:33.402 ************************************
00:18:33.402 16:56:34 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
00:18:33.402 16:56:34 accel -- common/autotest_common.sh@1142 -- # return 0
00:18:33.402 16:56:34 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:18:33.402 16:56:34 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:18:33.402 16:56:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:18:33.402 16:56:34 accel -- common/autotest_common.sh@10 -- # set +x
00:18:33.402 ************************************
00:18:33.402 START TEST accel_decomp_full_mthread
00:18:33.402 ************************************
00:18:33.402 16:56:34 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:18:33.402 16:56:34 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc
00:18:33.402 16:56:34 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module
00:18:33.402 16:56:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:18:33.402 16:56:34 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:18:33.402 16:56:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:18:33.402 16:56:34 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:18:33.402 16:56:34 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config
00:18:33.402 16:56:34 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=()
00:18:33.402 16:56:34 accel.accel_decomp_full_mthread --
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:33.402 16:56:34 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:33.402 16:56:34 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:33.402 16:56:34 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:33.402 16:56:34 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:18:33.402 16:56:34 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:18:33.402 [2024-07-22 16:56:34.497840] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:33.402 [2024-07-22 16:56:34.498005] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63420 ] 00:18:33.402 [2024-07-22 16:56:34.660250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.402 [2024-07-22 16:56:34.919683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:18:33.661 16:56:35 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # 
case "$var" in 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:33.661 
16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:33.661 16:56:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:18:35.604
00:18:35.604 real 0m2.628s
00:18:35.604 user 0m2.321s
00:18:35.604 sys 0m0.211s
00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:18:35.604 16:56:37 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x
00:18:35.604 ************************************
00:18:35.604 END TEST accel_decomp_full_mthread
00:18:35.604 ************************************
00:18:35.604 16:56:37 accel -- common/autotest_common.sh@1142 -- # return 0
00:18:35.604 16:56:37 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:18:35.604 16:56:37 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62
00:18:35.604 16:56:37 accel -- accel/accel.sh@137 -- # build_accel_config
00:18:35.604 16:56:37 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:18:35.604 16:56:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:18:35.604 16:56:37 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:18:35.604 16:56:37 accel -- common/autotest_common.sh@10 -- # set +x
00:18:35.604 16:56:37 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:18:35.604 16:56:37 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:18:35.604 16:56:37 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:18:35.604 16:56:37 accel -- accel/accel.sh@36 -- # [[ -n '' ]]
00:18:35.604 16:56:37 accel -- accel/accel.sh@40 -- # local IFS=,
00:18:35.604 16:56:37 accel -- accel/accel.sh@41 -- # jq -r .
00:18:35.604 ************************************
00:18:35.604 START TEST accel_dif_functional_tests
00:18:35.604 ************************************
00:18:35.604 16:56:37 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62
00:18:35.862 [2024-07-22 16:56:37.248511] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:18:35.862 [2024-07-22 16:56:37.248748] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63467 ]
00:18:35.862 [2024-07-22 16:56:37.412147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:18:36.120 [2024-07-22 16:56:37.684718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:18:36.120 [2024-07-22 16:56:37.684824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:18:36.120 [2024-07-22 16:56:37.684830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:18:36.688
00:18:36.688
00:18:36.688 CUnit - A unit testing framework for C - Version 2.1-3
00:18:36.688 http://cunit.sourceforge.net/
00:18:36.688
00:18:36.688
00:18:36.688 Suite: accel_dif
00:18:36.688 Test: verify: DIF generated, GUARD check ...passed
00:18:36.688 Test: verify: DIF generated, APPTAG check ...passed
00:18:36.688 Test: verify: DIF generated, REFTAG check ...passed
00:18:36.688 Test: verify: DIF not generated, GUARD check ...[2024-07-22 16:56:38.038046] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:18:36.688 passed
00:18:36.688 Test: verify: DIF not generated, APPTAG check ...[2024-07-22 16:56:38.038461] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:18:36.688 passed
00:18:36.688 Test: verify: DIF not generated, REFTAG check ...[2024-07-22 16:56:38.038692] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:18:36.688 passed
00:18:36.688 Test: verify: APPTAG correct, APPTAG check ...passed
00:18:36.688 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-22 16:56:38.039149] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:18:36.688 passed
00:18:36.688 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:18:36.688 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:18:36.688 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:18:36.688 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed
00:18:36.688 Test: verify copy: DIF generated, GUARD check ...[2024-07-22 16:56:38.039673] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:18:36.688 passed
00:18:36.688 Test: verify copy: DIF generated, APPTAG check ...passed
00:18:36.688 Test: verify copy: DIF generated, REFTAG check ...passed
00:18:36.688 Test: verify copy: DIF not generated, GUARD check ...[2024-07-22 16:56:38.040216] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:18:36.688 passed
00:18:36.688 Test: verify copy: DIF not generated, APPTAG check ...passed
00:18:36.688 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-22 16:56:38.040435] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:18:36.688 [2024-07-22 16:56:38.040628] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:18:36.688 passed
00:18:36.688 Test: generate copy: DIF generated, GUARD check ...passed
00:18:36.688 Test: generate copy: DIF generated, APTTAG check ...passed
00:18:36.688 Test: generate copy: DIF generated, REFTAG check ...passed
00:18:36.688 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:18:36.688 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:18:36.688 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:18:36.688 Test: generate copy: iovecs-len validate ...[2024-07-22 16:56:38.041595] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:18:36.688 passed
00:18:36.688 Test: generate copy: buffer alignment validate ...passed
00:18:36.688
00:18:36.688 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:18:36.688               suites      1      1    n/a      0        0
00:18:36.688                tests     26     26     26      0        0
00:18:36.688              asserts    115    115    115      0      n/a
00:18:36.688
00:18:36.688 Elapsed time =    0.009 seconds
00:18:38.062
00:18:38.062 real 0m2.179s
00:18:38.062 user 0m4.101s
00:18:38.062 sys 0m0.310s
00:18:38.062 16:56:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable
00:18:38.062 16:56:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:18:38.062 ************************************
00:18:38.062 END TEST accel_dif_functional_tests
00:18:38.062 ************************************
00:18:38.062 16:56:39 accel -- common/autotest_common.sh@1142 -- # return 0
00:18:38.062
00:18:38.062 real 1m5.247s
00:18:38.062 user 1m9.646s
00:18:38.062 sys 0m7.031s
00:18:38.062 16:56:39 accel -- common/autotest_common.sh@1124 -- # xtrace_disable
00:18:38.062 ************************************
00:18:38.062 END TEST accel
00:18:38.062 ************************************
00:18:38.062 16:56:39 accel -- common/autotest_common.sh@10 -- # set +x
00:18:38.062 16:56:39
-- common/autotest_common.sh@1142 -- # return 0 00:18:38.062 16:56:39 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:18:38.062 16:56:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:38.062 16:56:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:38.062 16:56:39 -- common/autotest_common.sh@10 -- # set +x 00:18:38.062 ************************************ 00:18:38.062 START TEST accel_rpc 00:18:38.062 ************************************ 00:18:38.062 16:56:39 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:18:38.062 * Looking for test storage... 00:18:38.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:18:38.062 16:56:39 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:18:38.062 16:56:39 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=63555 00:18:38.062 16:56:39 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 63555 00:18:38.062 16:56:39 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:38.062 16:56:39 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 63555 ']' 00:18:38.062 16:56:39 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.062 16:56:39 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:38.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.062 16:56:39 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.062 16:56:39 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:38.062 16:56:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:38.062 [2024-07-22 16:56:39.658527] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:38.062 [2024-07-22 16:56:39.659056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63555 ] 00:18:38.320 [2024-07-22 16:56:39.835577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.628 [2024-07-22 16:56:40.103418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.196 16:56:40 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:39.196 16:56:40 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:18:39.196 16:56:40 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:18:39.196 16:56:40 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:18:39.196 16:56:40 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:18:39.196 16:56:40 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:18:39.196 16:56:40 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:18:39.196 16:56:40 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:39.196 16:56:40 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:39.196 16:56:40 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:39.196 ************************************ 00:18:39.196 START TEST accel_assign_opcode 00:18:39.196 ************************************ 00:18:39.196 16:56:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:18:39.196 16:56:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:18:39.196 16:56:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.196 16:56:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:18:39.196 [2024-07-22 16:56:40.592938] 
accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:18:39.196 16:56:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.196 16:56:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:18:39.196 16:56:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.196 16:56:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:18:39.196 [2024-07-22 16:56:40.600909] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:18:39.196 16:56:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.196 16:56:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:18:39.196 16:56:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.196 16:56:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:18:40.128 16:56:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.128 16:56:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:18:40.128 16:56:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:18:40.128 16:56:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.128 16:56:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:18:40.128 16:56:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:18:40.128 16:56:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.128 software 00:18:40.128 ************************************ 00:18:40.128 END TEST accel_assign_opcode 00:18:40.128 ************************************ 00:18:40.128 00:18:40.128 real 
0m0.931s 00:18:40.128 user 0m0.059s 00:18:40.128 sys 0m0.006s 00:18:40.128 16:56:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:40.128 16:56:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:18:40.128 16:56:41 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:18:40.128 16:56:41 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 63555 00:18:40.128 16:56:41 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 63555 ']' 00:18:40.128 16:56:41 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 63555 00:18:40.128 16:56:41 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:18:40.128 16:56:41 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:40.128 16:56:41 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63555 00:18:40.128 killing process with pid 63555 00:18:40.128 16:56:41 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:40.128 16:56:41 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:40.128 16:56:41 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63555' 00:18:40.128 16:56:41 accel_rpc -- common/autotest_common.sh@967 -- # kill 63555 00:18:40.128 16:56:41 accel_rpc -- common/autotest_common.sh@972 -- # wait 63555 00:18:42.654 ************************************ 00:18:42.654 END TEST accel_rpc 00:18:42.654 ************************************ 00:18:42.654 00:18:42.654 real 0m4.630s 00:18:42.654 user 0m4.497s 00:18:42.654 sys 0m0.711s 00:18:42.654 16:56:44 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:42.654 16:56:44 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:42.654 16:56:44 -- common/autotest_common.sh@1142 -- # return 0 00:18:42.654 16:56:44 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:18:42.654 16:56:44 -- common/autotest_common.sh@1099 
-- # '[' 2 -le 1 ']' 00:18:42.654 16:56:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:42.654 16:56:44 -- common/autotest_common.sh@10 -- # set +x 00:18:42.654 ************************************ 00:18:42.654 START TEST app_cmdline 00:18:42.654 ************************************ 00:18:42.654 16:56:44 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:18:42.654 * Looking for test storage... 00:18:42.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:18:42.654 16:56:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:18:42.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.654 16:56:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=63675 00:18:42.654 16:56:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 63675 00:18:42.654 16:56:44 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:18:42.654 16:56:44 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 63675 ']' 00:18:42.654 16:56:44 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.654 16:56:44 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:42.654 16:56:44 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.654 16:56:44 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:42.654 16:56:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:42.912 [2024-07-22 16:56:44.343845] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:42.912 [2024-07-22 16:56:44.344378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63675 ] 00:18:42.912 [2024-07-22 16:56:44.522569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.478 [2024-07-22 16:56:44.792859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.422 16:56:45 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:44.422 16:56:45 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:18:44.422 16:56:45 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:18:44.422 { 00:18:44.422 "version": "SPDK v24.09-pre git sha1 f7b31b2b9", 00:18:44.422 "fields": { 00:18:44.422 "major": 24, 00:18:44.422 "minor": 9, 00:18:44.422 "patch": 0, 00:18:44.422 "suffix": "-pre", 00:18:44.422 "commit": "f7b31b2b9" 00:18:44.422 } 00:18:44.422 } 00:18:44.422 16:56:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:18:44.422 16:56:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:18:44.422 16:56:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:18:44.422 16:56:45 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:18:44.422 16:56:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:18:44.422 16:56:45 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.422 16:56:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:44.422 16:56:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:18:44.422 16:56:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:18:44.422 16:56:45 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.422 16:56:46 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:18:44.422 16:56:46 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:18:44.422 16:56:46 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:44.422 16:56:46 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:18:44.422 16:56:46 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:44.422 16:56:46 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:44.422 16:56:46 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:44.422 16:56:46 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:44.422 16:56:46 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:44.422 16:56:46 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:44.422 16:56:46 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:44.422 16:56:46 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:44.422 16:56:46 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:44.422 16:56:46 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:44.680 request: 00:18:44.680 { 00:18:44.680 "method": "env_dpdk_get_mem_stats", 00:18:44.680 "req_id": 1 00:18:44.680 } 00:18:44.680 Got JSON-RPC error response 00:18:44.680 response: 00:18:44.680 { 00:18:44.680 "code": -32601, 00:18:44.680 "message": "Method not found" 00:18:44.680 } 00:18:44.939 16:56:46 app_cmdline -- common/autotest_common.sh@651 -- # es=1 
00:18:44.939 16:56:46 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:44.939 16:56:46 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:44.939 16:56:46 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:44.939 16:56:46 app_cmdline -- app/cmdline.sh@1 -- # killprocess 63675 00:18:44.939 16:56:46 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 63675 ']' 00:18:44.939 16:56:46 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 63675 00:18:44.939 16:56:46 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:18:44.939 16:56:46 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:44.939 16:56:46 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63675 00:18:44.939 killing process with pid 63675 00:18:44.939 16:56:46 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:44.939 16:56:46 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:44.939 16:56:46 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63675' 00:18:44.939 16:56:46 app_cmdline -- common/autotest_common.sh@967 -- # kill 63675 00:18:44.939 16:56:46 app_cmdline -- common/autotest_common.sh@972 -- # wait 63675 00:18:47.488 00:18:47.488 real 0m4.710s 00:18:47.488 user 0m4.934s 00:18:47.488 sys 0m0.730s 00:18:47.488 16:56:48 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:47.488 16:56:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:47.488 ************************************ 00:18:47.488 END TEST app_cmdline 00:18:47.488 ************************************ 00:18:47.488 16:56:48 -- common/autotest_common.sh@1142 -- # return 0 00:18:47.488 16:56:48 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:18:47.488 16:56:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:47.488 16:56:48 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:18:47.488 16:56:48 -- common/autotest_common.sh@10 -- # set +x 00:18:47.488 ************************************ 00:18:47.488 START TEST version 00:18:47.488 ************************************ 00:18:47.488 16:56:48 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:18:47.488 * Looking for test storage... 00:18:47.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:18:47.488 16:56:48 version -- app/version.sh@17 -- # get_header_version major 00:18:47.488 16:56:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:47.488 16:56:48 version -- app/version.sh@14 -- # cut -f2 00:18:47.488 16:56:48 version -- app/version.sh@14 -- # tr -d '"' 00:18:47.488 16:56:48 version -- app/version.sh@17 -- # major=24 00:18:47.488 16:56:48 version -- app/version.sh@18 -- # get_header_version minor 00:18:47.488 16:56:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:47.488 16:56:48 version -- app/version.sh@14 -- # cut -f2 00:18:47.488 16:56:48 version -- app/version.sh@14 -- # tr -d '"' 00:18:47.488 16:56:48 version -- app/version.sh@18 -- # minor=9 00:18:47.488 16:56:48 version -- app/version.sh@19 -- # get_header_version patch 00:18:47.488 16:56:48 version -- app/version.sh@14 -- # cut -f2 00:18:47.488 16:56:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:47.488 16:56:48 version -- app/version.sh@14 -- # tr -d '"' 00:18:47.488 16:56:48 version -- app/version.sh@19 -- # patch=0 00:18:47.488 16:56:48 version -- app/version.sh@20 -- # get_header_version suffix 00:18:47.488 16:56:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' 
/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:47.488 16:56:48 version -- app/version.sh@14 -- # cut -f2 00:18:47.488 16:56:48 version -- app/version.sh@14 -- # tr -d '"' 00:18:47.488 16:56:48 version -- app/version.sh@20 -- # suffix=-pre 00:18:47.488 16:56:48 version -- app/version.sh@22 -- # version=24.9 00:18:47.488 16:56:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:18:47.488 16:56:48 version -- app/version.sh@28 -- # version=24.9rc0 00:18:47.488 16:56:48 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:47.488 16:56:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:18:47.488 16:56:49 version -- app/version.sh@30 -- # py_version=24.9rc0 00:18:47.488 16:56:49 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:18:47.488 00:18:47.488 real 0m0.150s 00:18:47.488 user 0m0.072s 00:18:47.488 sys 0m0.111s 00:18:47.488 ************************************ 00:18:47.488 END TEST version 00:18:47.488 ************************************ 00:18:47.488 16:56:49 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:47.488 16:56:49 version -- common/autotest_common.sh@10 -- # set +x 00:18:47.488 16:56:49 -- common/autotest_common.sh@1142 -- # return 0 00:18:47.488 16:56:49 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:18:47.488 16:56:49 -- spdk/autotest.sh@198 -- # uname -s 00:18:47.488 16:56:49 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:18:47.488 16:56:49 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:18:47.488 16:56:49 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:18:47.488 16:56:49 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:18:47.488 16:56:49 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:47.488 16:56:49 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:47.488 
16:56:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:47.488 16:56:49 -- common/autotest_common.sh@10 -- # set +x 00:18:47.488 16:56:49 -- spdk/autotest.sh@262 -- # '[' 1 -eq 1 ']' 00:18:47.488 16:56:49 -- spdk/autotest.sh@263 -- # run_test iscsi_tgt /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/iscsi_tgt.sh 00:18:47.488 16:56:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:47.488 16:56:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:47.488 16:56:49 -- common/autotest_common.sh@10 -- # set +x 00:18:47.747 ************************************ 00:18:47.747 START TEST iscsi_tgt 00:18:47.747 ************************************ 00:18:47.747 16:56:49 iscsi_tgt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/iscsi_tgt.sh 00:18:47.747 * Looking for test storage... 00:18:47.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@10 -- # uname -s 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:47.747 Cleaning up iSCSI connection 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@18 -- # iscsicleanup 00:18:47.747 16:56:49 iscsi_tgt -- 
common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:18:47.747 16:56:49 iscsi_tgt -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:18:47.747 iscsiadm: No matching sessions found 00:18:47.747 16:56:49 iscsi_tgt -- common/autotest_common.sh@981 -- # true 00:18:47.747 16:56:49 iscsi_tgt -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:18:47.747 iscsiadm: No records found 00:18:47.747 16:56:49 iscsi_tgt -- common/autotest_common.sh@982 -- # true 00:18:47.747 16:56:49 iscsi_tgt -- common/autotest_common.sh@983 -- # rm -rf 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@21 -- # create_veth_interfaces 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@32 -- # ip link set init_br nomaster 00:18:47.747 Cannot find device "init_br" 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@32 -- # true 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@33 -- # ip link set tgt_br nomaster 00:18:47.747 Cannot find device "tgt_br" 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@33 -- # true 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@34 -- # ip link set tgt_br2 nomaster 00:18:47.747 Cannot find device "tgt_br2" 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@34 -- # true 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@35 -- # ip link set init_br down 00:18:47.747 Cannot find device "init_br" 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@35 -- # true 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@36 -- # ip link set tgt_br down 00:18:47.747 Cannot find device "tgt_br" 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@36 -- # true 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@37 -- # ip link set tgt_br2 down 00:18:47.747 Cannot find device "tgt_br2" 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@37 -- # true 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@38 -- # ip link delete iscsi_br type bridge 00:18:47.747 Cannot find device 
"iscsi_br" 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@38 -- # true 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@39 -- # ip link delete spdk_init_int 00:18:47.747 Cannot find device "spdk_init_int" 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@39 -- # true 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@40 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int 00:18:47.747 Cannot open network namespace "spdk_iscsi_ns": No such file or directory 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@40 -- # true 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@41 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int2 00:18:47.747 Cannot open network namespace "spdk_iscsi_ns": No such file or directory 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@41 -- # true 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@42 -- # ip netns del spdk_iscsi_ns 00:18:47.747 Cannot remove namespace file "/var/run/netns/spdk_iscsi_ns": No such file or directory 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@42 -- # true 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@44 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@47 -- # ip netns add spdk_iscsi_ns 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@50 -- # ip link add spdk_init_int type veth peer name init_br 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@51 -- # ip link add spdk_tgt_int type veth peer name tgt_br 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@52 -- # ip link add spdk_tgt_int2 type veth peer name tgt_br2 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@55 -- # ip link set spdk_tgt_int netns spdk_iscsi_ns 00:18:47.747 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@56 -- # ip link set spdk_tgt_int2 netns spdk_iscsi_ns 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@59 -- # ip addr add 10.0.0.2/24 dev spdk_init_int 00:18:48.006 16:56:49 
iscsi_tgt -- iscsi_tgt/common.sh@60 -- # ip netns exec spdk_iscsi_ns ip addr add 10.0.0.1/24 dev spdk_tgt_int 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@61 -- # ip netns exec spdk_iscsi_ns ip addr add 10.0.0.3/24 dev spdk_tgt_int2 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@64 -- # ip link set spdk_init_int up 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@65 -- # ip link set init_br up 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@66 -- # ip link set tgt_br up 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@67 -- # ip link set tgt_br2 up 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@68 -- # ip netns exec spdk_iscsi_ns ip link set spdk_tgt_int up 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@69 -- # ip netns exec spdk_iscsi_ns ip link set spdk_tgt_int2 up 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@70 -- # ip netns exec spdk_iscsi_ns ip link set lo up 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@73 -- # ip link add iscsi_br type bridge 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@74 -- # ip link set iscsi_br up 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@77 -- # ip link set init_br master iscsi_br 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@78 -- # ip link set tgt_br master iscsi_br 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@79 -- # ip link set tgt_br2 master iscsi_br 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@82 -- # iptables -I INPUT 1 -i spdk_init_int -p tcp --dport 3260 -j ACCEPT 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@83 -- # iptables -A FORWARD -i iscsi_br -o iscsi_br -j ACCEPT 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@86 -- # ping -c 1 10.0.0.1 00:18:48.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:48.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:18:48.006 00:18:48.006 --- 10.0.0.1 ping statistics --- 00:18:48.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.006 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@87 -- # ping -c 1 10.0.0.3 00:18:48.006 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:48.006 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.132 ms 00:18:48.006 00:18:48.006 --- 10.0.0.3 ping statistics --- 00:18:48.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.006 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@88 -- # ip netns exec spdk_iscsi_ns ping -c 1 10.0.0.2 00:18:48.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:48.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.022 ms 00:18:48.006 00:18:48.006 --- 10.0.0.2 ping statistics --- 00:18:48.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.006 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/common.sh@89 -- # ip netns exec spdk_iscsi_ns ping -c 1 10.0.0.2 00:18:48.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:48.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.024 ms 00:18:48.006 00:18:48.006 --- 10.0.0.2 ping statistics --- 00:18:48.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.006 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@23 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:18:48.006 16:56:49 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@25 -- # run_test iscsi_tgt_sock /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock/sock.sh 00:18:48.006 16:56:49 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:48.006 16:56:49 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:48.006 16:56:49 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:18:48.264 ************************************ 00:18:48.264 START TEST iscsi_tgt_sock 00:18:48.264 ************************************ 00:18:48.264 16:56:49 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock/sock.sh 00:18:48.264 * Looking for test storage... 
00:18:48.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock
00:18:48.264 16:56:49 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh
00:18:48.264 16:56:49 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br
00:18:48.264 16:56:49 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int
00:18:48.264 16:56:49 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br
00:18:48.264 16:56:49 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns
00:18:48.264 16:56:49 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE")
00:18:48.264 16:56:49 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int
00:18:48.264 16:56:49 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2
00:18:48.264 16:56:49 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br
00:18:48.264 16:56:49 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2
00:18:48.264 16:56:49 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1
00:18:48.264 16:56:49 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3
00:18:48.264 16:56:49 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2
00:18:48.264 16:56:49 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260
00:18:48.264 16:56:49 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32
00:18:48.264 16:56:49 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2
00:18:48.264 16:56:49 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY
00:18:48.264 16:56:49 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1
00:18:48.264 16:56:49 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}")
00:18:48.264 16:56:49 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF
00:18:48.265 16:56:49 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@48 -- # iscsitestinit
00:18:48.265 16:56:49 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']'
00:18:48.265 16:56:49 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@50 -- # HELLO_SOCK_APP='ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/examples/hello_sock'
00:18:48.265 16:56:49 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@51 -- # SOCAT_APP=socat
00:18:48.265 16:56:49 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@52 -- # OPENSSL_APP=openssl
00:18:48.265 16:56:49 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@53 -- # PSK='-N ssl -E 1234567890ABCDEF -I psk.spdk.io'
00:18:48.265 16:56:49 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@58 -- # timing_enter sock_client
00:18:48.265 16:56:49 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@722 -- # xtrace_disable
00:18:48.265 16:56:49 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x
00:18:48.265 16:56:49 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@59 -- # echo 'Testing client path'
00:18:48.265 Testing client path
00:18:48.265 Waiting for process to start up and listen on address 10.0.0.2:3260...
00:18:48.265 16:56:49 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@63 -- # server_pid=64022
00:18:48.265 16:56:49 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@64 -- # trap 'killprocess $server_pid;iscsitestfini; exit 1' SIGINT SIGTERM EXIT
00:18:48.265 16:56:49 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@62 -- # socat tcp-l:3260,fork,bind=10.0.0.2 exec:/bin/cat
00:18:48.265 16:56:49 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@66 -- # waitfortcp 64022 10.0.0.2:3260
00:18:48.265 16:56:49 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@25 -- # local addr=10.0.0.2:3260
00:18:48.265 16:56:49 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@27 -- # echo 'Waiting for process to start up and listen on address 10.0.0.2:3260...'
00:18:48.265 16:56:49 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@29 -- # xtrace_disable
00:18:48.265 16:56:49 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x
00:18:48.832 [2024-07-22 16:56:50.281538] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:18:48.832 [2024-07-22 16:56:50.281791] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64026 ]
00:18:49.090 [2024-07-22 16:56:50.485044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:49.349 [2024-07-22 16:56:50.801884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:18:49.349 [2024-07-22 16:56:50.802000] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application
00:18:49.349 [2024-07-22 16:56:50.802047] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix)
00:18:49.349 [2024-07-22 16:56:50.802259] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 37512)
00:18:49.349 [2024-07-22 16:56:50.802413] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection...
00:18:50.287 [2024-07-22 16:56:51.802449] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed
00:18:50.287 [2024-07-22 16:56:51.802764] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:18:50.861 [2024-07-22 16:56:52.291521] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:18:50.861 [2024-07-22 16:56:52.291700] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64057 ]
00:18:50.861 [2024-07-22 16:56:52.472180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:51.428 [2024-07-22 16:56:52.750846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:18:51.428 [2024-07-22 16:56:52.750941] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application
00:18:51.428 [2024-07-22 16:56:52.750998] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix)
00:18:51.428 [2024-07-22 16:56:52.751206] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 54250)
00:18:51.428 [2024-07-22 16:56:52.751339] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection...
00:18:52.361 [2024-07-22 16:56:53.751373] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed
00:18:52.361 [2024-07-22 16:56:53.751575] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:18:52.926 [2024-07-22 16:56:54.235321] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:18:52.926 [2024-07-22 16:56:54.235502] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64088 ]
00:18:52.926 [2024-07-22 16:56:54.410987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:53.183 [2024-07-22 16:56:54.679082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:18:53.183 [2024-07-22 16:56:54.679197] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application
00:18:53.183 [2024-07-22 16:56:54.679242] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix)
00:18:53.183 [2024-07-22 16:56:54.679609] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 54254)
00:18:53.183 [2024-07-22 16:56:54.679720] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection...
00:18:54.115 [2024-07-22 16:56:55.679759] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed
00:18:54.115 [2024-07-22 16:56:55.680031] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:18:54.681 killing process with pid 64022
00:18:54.681 Testing SSL server path
00:18:54.681 Waiting for process to start up and listen on address 10.0.0.1:3260...
00:18:54.681 [2024-07-22 16:56:56.233505] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:18:54.681 [2024-07-22 16:56:56.233953] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64141 ]
00:18:54.938 [2024-07-22 16:56:56.399565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:55.196 [2024-07-22 16:56:56.660299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:18:55.196 [2024-07-22 16:56:56.660729] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application
00:18:55.196 [2024-07-22 16:56:56.660968] hello_sock.c: 472:hello_sock_listen: *NOTICE*: Listening connection on 10.0.0.1:3260 with sock_impl(ssl)
00:18:55.196 [2024-07-22 16:56:56.748228] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:18:55.196 [2024-07-22 16:56:56.748688] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64146 ]
00:18:55.454 [2024-07-22 16:56:56.926801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:55.713 [2024-07-22 16:56:57.235010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:18:55.713 [2024-07-22 16:56:57.235394] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application
00:18:55.713 [2024-07-22 16:56:57.235563] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl)
00:18:55.713 [2024-07-22 16:56:57.241787] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 59218)
00:18:55.713 [2024-07-22 16:56:57.242533] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 59218) to (10.0.0.1, 3260)
00:18:55.713 [2024-07-22 16:56:57.245788] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection...
00:18:56.647 [2024-07-22 16:56:58.246007] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed
00:18:56.647 [2024-07-22 16:56:58.246542] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed
00:18:56.647 [2024-07-22 16:56:58.246754] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:18:57.214 [2024-07-22 16:56:58.764859] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:18:57.214 [2024-07-22 16:56:58.765923] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64175 ]
00:18:57.472 [2024-07-22 16:56:58.940380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:57.730 [2024-07-22 16:56:59.227034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:18:57.730 [2024-07-22 16:56:59.227325] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application
00:18:57.730 [2024-07-22 16:56:59.227528] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl)
00:18:57.730 [2024-07-22 16:56:59.229198] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 59222) to (10.0.0.1, 3260)
00:18:57.730 [2024-07-22 16:56:59.233559] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 59222)
00:18:57.730 [2024-07-22 16:56:59.236765] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection...
00:18:58.664 [2024-07-22 16:57:00.236938] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed
00:18:58.665 [2024-07-22 16:57:00.237406] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed
00:18:58.665 [2024-07-22 16:57:00.237590] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:18:59.230 [2024-07-22 16:57:00.744791] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:18:59.230 [2024-07-22 16:57:00.744945] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64203 ]
00:18:59.488 [2024-07-22 16:57:00.908004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:59.751 [2024-07-22 16:57:01.184392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:18:59.751 [2024-07-22 16:57:01.184742] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application
00:18:59.751 [2024-07-22 16:57:01.184901] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl)
00:18:59.751 [2024-07-22 16:57:01.186829] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 59228) to (10.0.0.1, 3260)
00:18:59.751 [2024-07-22 16:57:01.190829] posix.c: 755:posix_sock_create_ssl_context: *ERROR*: Incorrect TLS version provided: 7
00:18:59.751 [2024-07-22 16:57:01.191064] posix.c:1033:posix_sock_create: *ERROR*: posix_sock_create_ssl_context() failed, errno = 2
00:18:59.751 [2024-07-22 16:57:01.191224] hello_sock.c: 309:hello_sock_connect: *ERROR*: connect error(2): No such file or directory
00:18:59.751 [2024-07-22 16:57:01.191317] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:18:59.751 [2024-07-22 16:57:01.191400] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed
00:18:59.751 [2024-07-22 16:57:01.191591] hello_sock.c: 591:main: *ERROR*: ERROR starting application
00:18:59.751 [2024-07-22 16:57:01.191653] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:19:00.331 [2024-07-22 16:57:01.685568] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:19:00.331 [2024-07-22 16:57:01.686003] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64220 ]
00:19:00.331 [2024-07-22 16:57:01.859311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:00.590 [2024-07-22 16:57:02.187045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:00.590 [2024-07-22 16:57:02.187394] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application
00:19:00.590 [2024-07-22 16:57:02.187450] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl)
00:19:00.590 [2024-07-22 16:57:02.189417] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 59230) to (10.0.0.1, 3260)
00:19:00.590 [2024-07-22 16:57:02.193747] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 59230)
00:19:00.590 [2024-07-22 16:57:02.196920] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection...
00:19:01.965 [2024-07-22 16:57:03.197096] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed
00:19:01.965 [2024-07-22 16:57:03.197544] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed
00:19:01.965 [2024-07-22 16:57:03.197734] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:19:02.224 SSL_connect:before SSL initialization
00:19:02.224 SSL_connect:SSLv3/TLS write client hello
00:19:02.224 [2024-07-22 16:57:03.723906] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.2, 34162) to (10.0.0.1, 3260)
00:19:02.224 SSL_connect:SSLv3/TLS write client hello
00:19:02.224 SSL_connect:SSLv3/TLS read server hello
00:19:02.224 Can't use SSL_get_servername
00:19:02.224 SSL_connect:TLSv1.3 read encrypted extensions
00:19:02.224 SSL_connect:SSLv3/TLS read finished
00:19:02.224 SSL_connect:SSLv3/TLS write change cipher spec
00:19:02.224 SSL_connect:SSLv3/TLS write finished
00:19:02.224 SSL_connect:SSL negotiation finished successfully
00:19:02.224 SSL_connect:SSL negotiation finished successfully
00:19:02.224 SSL_connect:SSLv3/TLS read server session ticket
00:19:04.127 DONE
00:19:04.127 SSL3 alert write:warning:close notify
00:19:04.127 [2024-07-22 16:57:05.648914] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed
00:19:04.127 [2024-07-22 16:57:05.715879] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:19:04.127 [2024-07-22 16:57:05.716090] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64275 ]
00:19:04.385 [2024-07-22 16:57:05.903529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:04.643 [2024-07-22 16:57:06.195304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:04.643 [2024-07-22 16:57:06.195657] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application
00:19:04.643 [2024-07-22 16:57:06.195814] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl)
00:19:04.643 [2024-07-22 16:57:06.197571] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 44892) to (10.0.0.1, 3260)
00:19:04.643 [2024-07-22 16:57:06.201809] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 44892)
00:19:04.643 [2024-07-22 16:57:06.203449] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed
00:19:04.643 [2024-07-22 16:57:06.203458] hello_sock.c: 208:hello_sock_recv_poll: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:19:06.016 [2024-07-22 16:57:07.203448] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed
00:19:06.016 [2024-07-22 16:57:07.204382] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:06.016 [2024-07-22 16:57:07.204592] hello_sock.c: 591:main: *ERROR*: ERROR starting application
00:19:06.016 [2024-07-22 16:57:07.204726] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:19:06.274 [2024-07-22 16:57:07.696898] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:19:06.274 [2024-07-22 16:57:07.697381] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64297 ]
00:19:06.274 [2024-07-22 16:57:07.873359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:06.838 [2024-07-22 16:57:08.149074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:06.838 [2024-07-22 16:57:08.149469] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application
00:19:06.838 [2024-07-22 16:57:08.149628] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl)
00:19:06.838 [2024-07-22 16:57:08.151614] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 44894) to (10.0.0.1, 3260)
00:19:06.838 [2024-07-22 16:57:08.155814] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 44894)
00:19:06.838 [2024-07-22 16:57:08.157041] posix.c: 586:posix_sock_psk_find_session_server_cb: *ERROR*: Unknown Client's PSK ID
00:19:06.838 [2024-07-22 16:57:08.157149] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed
00:19:06.838 [2024-07-22 16:57:08.157175] hello_sock.c: 208:hello_sock_recv_poll: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:19:07.771 [2024-07-22 16:57:09.157164] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed
00:19:07.771 [2024-07-22 16:57:09.157556] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:07.771 [2024-07-22 16:57:09.157643] hello_sock.c: 591:main: *ERROR*: ERROR starting application
00:19:07.771 [2024-07-22 16:57:09.157660] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:19:08.029 killing process with pid 64141
00:19:09.402 [2024-07-22 16:57:10.616535] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed
00:19:09.402 [2024-07-22 16:57:10.616802] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:19:09.660 Waiting for process to start up and listen on address 10.0.0.1:3260...
00:19:09.660 [2024-07-22 16:57:11.140924] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:19:09.660 [2024-07-22 16:57:11.141125] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64366 ]
00:19:09.918 [2024-07-22 16:57:11.318762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:10.176 [2024-07-22 16:57:11.580397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:10.176 [2024-07-22 16:57:11.580521] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application
00:19:10.176 [2024-07-22 16:57:11.580647] hello_sock.c: 472:hello_sock_listen: *NOTICE*: Listening connection on 10.0.0.1:3260 with sock_impl(posix)
00:19:10.176 [2024-07-22 16:57:11.605939] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.2, 34164) to (10.0.0.1, 3260)
00:19:10.176 [2024-07-22 16:57:11.606108] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed
00:19:10.176 killing process with pid 64366
00:19:11.110 [2024-07-22 16:57:12.632942] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed
00:19:11.110 [2024-07-22 16:57:12.633220] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:19:11.676 ************************************
00:19:11.676 END TEST iscsi_tgt_sock
00:19:11.676 ************************************
00:19:11.676
00:19:11.676 real 0m23.484s
00:19:11.676 user 0m29.765s
00:19:11.676 sys 0m2.953s
00:19:11.676 16:57:13 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:11.676 16:57:13 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x
00:19:11.676 16:57:13 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0
00:19:11.676 16:57:13 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@26 -- # [[ -d /usr/local/calsoft ]]
00:19:11.676 16:57:13 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@27 -- # run_test iscsi_tgt_calsoft /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.sh
00:19:11.676 16:57:13 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:19:11.676 16:57:13 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:11.676 16:57:13 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x
00:19:11.676 ************************************
00:19:11.676 START TEST iscsi_tgt_calsoft
00:19:11.676 ************************************
00:19:11.676 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.sh
00:19:11.676 * Looking for test storage...
00:19:11.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE")
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}")
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@15 -- # MALLOC_BDEV_SIZE=64
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@16 -- # MALLOC_BLOCK_SIZE=512
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@18 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@19 -- # calsoft_py=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.py
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@22 -- # mkdir -p /usr/local/etc
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@23 -- # cp /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/its.conf /usr/local/etc/
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@26 -- # echo IP=10.0.0.1
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@28 -- # timing_enter start_iscsi_tgt
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@722 -- # xtrace_disable
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@30 -- # iscsitestinit
00:19:11.677 Process pid: 64460
00:19:11.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']'
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@33 -- # pid=64460
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@34 -- # echo 'Process pid: 64460'
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@36 -- # trap 'killprocess $pid; delete_tmp_conf_files; iscsitestfini; exit 1 ' SIGINT SIGTERM EXIT
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@32 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x1 --wait-for-rpc
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@38 -- # waitforlisten 64460
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@829 -- # '[' -z 64460 ']'
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:11.677 16:57:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x
00:19:11.935 [2024-07-22 16:57:13.386954] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:19:11.935 [2024-07-22 16:57:13.387147] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64460 ]
00:19:12.194 [2024-07-22 16:57:13.552802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:12.453 [2024-07-22 16:57:13.858133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:12.711 16:57:14 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:12.711 16:57:14 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@862 -- # return 0
00:19:12.711 16:57:14 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config
00:19:13.277 16:57:14 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
00:19:14.243 16:57:15 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@41 -- # echo 'iscsi_tgt is listening. Running tests...'
00:19:14.243 iscsi_tgt is listening. Running tests...
00:19:14.244 16:57:15 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@43 -- # timing_exit start_iscsi_tgt
00:19:14.244 16:57:15 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@728 -- # xtrace_disable
00:19:14.244 16:57:15 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x
00:19:14.244 16:57:15 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_auth_group 1 -c 'user:root secret:tester'
00:19:14.501 16:57:16 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_discovery_auth -g 1
00:19:14.759 16:57:16 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260
00:19:15.017 16:57:16 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32
00:19:15.275 16:57:16 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b MyBdev 64 512
00:19:15.533 MyBdev
00:19:15.792 16:57:17 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias MyBdev:0 1:2 64 -g 1
00:19:16.051 16:57:17 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@55 -- # sleep 1
00:19:16.985 16:57:18 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@57 -- # '[' '' ']'
00:19:16.985 16:57:18 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.py /home/vagrant/spdk_repo/spdk/../output
00:19:16.985 [2024-07-22 16:57:18.500552] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9
00:19:16.985 [2024-07-22 16:57:18.540129] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:19:16.985 [2024-07-22 16:57:18.587518] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(1) ignore (ExpCmdSN=3, MaxCmdSN=66)
00:19:16.985 [2024-07-22 16:57:18.587710] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(1) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:19:16.985 [2024-07-22 16:57:18.587988] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=5, MaxCmdSN=67)
00:19:16.985 [2024-07-22 16:57:18.588136] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=6, MaxCmdSN=67)
00:19:16.985 [2024-07-22 16:57:18.589097] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9
00:19:17.242 [2024-07-22 16:57:18.625393] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:19:17.243 [2024-07-22 16:57:18.641814] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:19:17.243 [2024-07-22 16:57:18.661823] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:19:17.243 [2024-07-22 16:57:18.680648] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:19:17.243 [2024-07-22 16:57:18.680812] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:19:17.243 [2024-07-22 16:57:18.717457] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:19:17.243 [2024-07-22 16:57:18.737216] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6
00:19:17.243 [2024-07-22 16:57:18.755854] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:19:17.243 [2024-07-22 16:57:18.824709] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:19:17.243 [2024-07-22 16:57:18.840003] iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 4/max 1, expecting 0
00:19:17.500 [2024-07-22 16:57:18.856242] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key ImmediateDataa
00:19:17.500 [2024-07-22 16:57:18.875332] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6
00:19:17.500 [2024-07-22 16:57:18.952679] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:19:17.501 [2024-07-22 16:57:18.953028] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:19:17.501 [2024-07-22 16:57:18.972485] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:19:17.501 [2024-07-22 16:57:18.992268] iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0
00:19:17.501 [2024-07-22 16:57:18.992432] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:19:17.501 [2024-07-22 16:57:19.011614] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=8, MaxCmdSN=71)
00:19:17.501 [2024-07-22 16:57:19.011763] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9
00:19:17.501 [2024-07-22 16:57:19.046509] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:19:17.501 [2024-07-22 16:57:19.046671] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:19:17.501 [2024-07-22 16:57:19.084555] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:19:17.501 [2024-07-22 16:57:19.084750] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:19:17.501 [2024-07-22 16:57:19.104976] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:19:17.501 [2024-07-22 16:57:19.105137] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:19:17.759 [2024-07-22 16:57:19.135948] param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 276
00:19:17.759 [2024-07-22 16:57:19.136009] iscsi.c:1303:iscsi_op_login_store_incoming_params: *ERROR*: iscsi_parse_params() failed
00:19:17.759 [2024-07-22 16:57:19.154917] iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU
[2024-07-22 16:57:19.154965] iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5)
failed on iqn.2016-06.io.spdk:Target3,t,0x0001(iqn.1994-05.com.redhat:b3283535dc3b,i,0x00230d030000) 00:19:17.759 [2024-07-22 16:57:19.154983] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection 00:19:17.759 [2024-07-22 16:57:19.173079] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:19:17.759 [2024-07-22 16:57:19.173227] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:17.759 [2024-07-22 16:57:19.250920] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:19:17.759 [2024-07-22 16:57:19.251195] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:17.759 [2024-07-22 16:57:19.269312] iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 2745410467, and the dataout task tag is 2728567458 00:19:17.759 [2024-07-22 16:57:19.269528] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:19:17.759 [2024-07-22 16:57:19.269719] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:19:17.759 [2024-07-22 16:57:19.269830] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:19:18.069 [2024-07-22 16:57:19.601316] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:19:18.069 [2024-07-22 16:57:19.621980] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:19:18.069 [2024-07-22 16:57:19.622159] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:18.346 [2024-07-22 16:57:19.692604] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:19:18.346 [2024-07-22 16:57:19.692773] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:18.346 [2024-07-22 16:57:19.738486] iscsi.c:4522:iscsi_pdu_hdr_handle: *ERROR*: before Full Feature 00:19:18.346 PDU 00:19:18.346 00000000 01 81 00 00 00 00 00 81 00 02 3d 03 00 00 00 00 
..........=..... 00:19:18.346 00000010 00 00 00 05 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:19:18.346 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:19:18.346 [2024-07-22 16:57:19.738614] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection 00:19:18.346 [2024-07-22 16:57:19.758662] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:19:18.346 [2024-07-22 16:57:19.758817] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:18.346 [2024-07-22 16:57:19.831287] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=2 00:19:18.346 [2024-07-22 16:57:19.863800] iscsi.c:4522:iscsi_pdu_hdr_handle: *ERROR*: before Full Feature 00:19:18.346 PDU 00:19:18.346 00000000 00 81 00 00 00 00 00 81 00 02 3d 03 00 00 00 00 ..........=..... 00:19:18.346 00000010 00 00 00 05 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:19:18.346 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:19:18.346 [2024-07-22 16:57:19.863945] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. 
Close the connection 00:19:18.346 [2024-07-22 16:57:19.942611] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:19:18.604 [2024-07-22 16:57:19.962611] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:19:18.604 [2024-07-22 16:57:19.962960] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:18.604 [2024-07-22 16:57:19.981911] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:19:18.604 [2024-07-22 16:57:19.982130] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:18.604 [2024-07-22 16:57:20.035434] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:19:18.604 [2024-07-22 16:57:20.035780] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:18.604 [2024-07-22 16:57:20.069410] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:19:18.605 [2024-07-22 16:57:20.069571] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:18.605 [2024-07-22 16:57:20.105325] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:19:18.605 [2024-07-22 16:57:20.124014] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:19:18.605 [2024-07-22 16:57:20.124174] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:18.605 [2024-07-22 16:57:20.143825] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:19:18.605 [2024-07-22 16:57:20.144163] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:18.605 [2024-07-22 16:57:20.180573] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:19:18.605 [2024-07-22 16:57:20.199811] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:19:18.862 [2024-07-22 16:57:20.252483] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 
00:19:18.862 [2024-07-22 16:57:20.269983] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:19:18.862 [2024-07-22 16:57:20.270142] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:18.862 [2024-07-22 16:57:20.288609] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:19:18.862 [2024-07-22 16:57:20.306937] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:19:18.862 [2024-07-22 16:57:20.342765] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:19:18.862 [2024-07-22 16:57:20.356018] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:19:18.862 [2024-07-22 16:57:20.406238] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:19:18.862 [2024-07-22 16:57:20.406396] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:18.862 [2024-07-22 16:57:20.455059] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:19:18.862 [2024-07-22 16:57:20.455225] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:19.120 [2024-07-22 16:57:20.491817] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:19:19.120 [2024-07-22 16:57:20.511649] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:19:19.120 [2024-07-22 16:57:20.531701] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:19:19.120 [2024-07-22 16:57:20.549093] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:19:19.120 [2024-07-22 16:57:20.567880] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(3) error ExpCmdSN=4 00:19:19.120 [2024-07-22 16:57:20.568233] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:19:19.120 [2024-07-22 16:57:20.606758] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=ffffffff 00:19:19.120 [2024-07-22 
16:57:20.627510] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:19:19.120 [2024-07-22 16:57:20.627805] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:19.120 [2024-07-22 16:57:20.648690] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:19:19.120 [2024-07-22 16:57:20.660765] iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 4/max 1, expecting 0 00:19:19.121 [2024-07-22 16:57:20.695938] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:19:19.121 [2024-07-22 16:57:20.696104] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(4) ignore (ExpCmdSN=5, MaxCmdSN=67) 00:19:19.121 [2024-07-22 16:57:20.696379] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:19:19.378 [2024-07-22 16:57:20.755482] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:19:21.282 [2024-07-22 16:57:22.715709] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:21.282 [2024-07-22 16:57:22.736709] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:19:21.282 [2024-07-22 16:57:22.737006] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:21.282 [2024-07-22 16:57:22.756319] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:19:21.282 [2024-07-22 16:57:22.850706] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:19:21.282 [2024-07-22 16:57:22.851123] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:21.282 [2024-07-22 16:57:22.888645] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(341) ignore (ExpCmdSN=8, MaxCmdSN=71) 00:19:21.282 [2024-07-22 16:57:22.888813] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(8) ignore (ExpCmdSN=9, MaxCmdSN=71) 00:19:21.282 [2024-07-22 16:57:22.889554] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 
00:19:21.541 [2024-07-22 16:57:22.944986] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:19:21.541 [2024-07-22 16:57:22.965676] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:19:21.541 [2024-07-22 16:57:22.985612] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:19:21.541 [2024-07-22 16:57:22.985677] iscsi.c:3961:iscsi_handle_recovery_datain: *ERROR*: Initiator requests BegRun: 0x00000000, RunLength:0x00001000 greater than maximum DataSN: 0x00000004. 00:19:21.541 [2024-07-22 16:57:22.985699] iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=10) failed on iqn.2016-06.io.spdk:Target3,t,0x0001(iqn.1994-05.com.redhat:b3283535dc3b,i,0x00230d030000) 00:19:21.541 [2024-07-22 16:57:22.985714] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection 00:19:21.541 [2024-07-22 16:57:23.005569] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:19:21.541 [2024-07-22 16:57:23.145594] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:19:21.800 [2024-07-22 16:57:23.219727] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:19:21.800 [2024-07-22 16:57:23.219897] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:21.800 [2024-07-22 16:57:23.241698] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:19:21.800 [2024-07-22 16:57:23.241877] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:21.800 [2024-07-22 16:57:23.258369] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:19:21.800 [2024-07-22 16:57:23.277605] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=2 00:19:21.800 [2024-07-22 16:57:23.315756] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:19:21.800 [2024-07-22 16:57:23.315911] 
iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:21.800 [2024-07-22 16:57:23.351600] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:19:21.800 [2024-07-22 16:57:23.372928] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:19:21.800 [2024-07-22 16:57:23.373123] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:21.800 [2024-07-22 16:57:23.411142] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:19:22.059 [2024-07-22 16:57:23.429461] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:19:22.059 [2024-07-22 16:57:23.429724] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:22.059 [2024-07-22 16:57:23.450360] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:19:22.059 [2024-07-22 16:57:23.450532] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:22.059 [2024-07-22 16:57:23.472057] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:19:22.059 [2024-07-22 16:57:23.493023] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:19:22.059 [2024-07-22 16:57:23.493199] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:22.059 [2024-07-22 16:57:23.545980] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:19:22.059 [2024-07-22 16:57:23.546191] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:22.059 [2024-07-22 16:57:23.590568] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:19:22.059 [2024-07-22 16:57:23.590743] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:22.059 [2024-07-22 16:57:23.628964] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:19:22.059 [2024-07-22 16:57:23.652107] iscsi.c:4448:iscsi_update_cmdsn: 
*ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:19:22.059 [2024-07-22 16:57:23.652290] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:22.059 [2024-07-22 16:57:23.670893] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:19:22.059 [2024-07-22 16:57:23.671044] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:22.318 [2024-07-22 16:57:23.705251] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(2) ignore (ExpCmdSN=3, MaxCmdSN=66) 00:19:22.318 [2024-07-22 16:57:23.705406] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:19:22.318 [2024-07-22 16:57:23.705512] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:19:22.318 [2024-07-22 16:57:23.725798] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:19:22.318 [2024-07-22 16:57:23.744654] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:19:22.318 [2024-07-22 16:57:23.864802] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:19:22.318 [2024-07-22 16:57:23.923116] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:19:23.694 [2024-07-22 16:57:24.995233] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:19:24.631 [2024-07-22 16:57:25.978696] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=6, MaxCmdSN=68) 00:19:24.631 [2024-07-22 16:57:25.979198] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=7 00:19:24.631 [2024-07-22 16:57:25.995483] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=5, MaxCmdSN=68) 00:19:25.564 [2024-07-22 16:57:26.995775] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(4) ignore (ExpCmdSN=6, MaxCmdSN=69) 00:19:25.564 [2024-07-22 16:57:26.996041] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=7, MaxCmdSN=70) 00:19:25.564 [2024-07-22 16:57:26.996068] 
iscsi.c:4028:iscsi_handle_status_snack: *ERROR*: Unable to find StatSN: 0x00000007. For a StatusSNACK, assuming this is a proactive SNACK for an untransmitted StatSN, ignoring. 00:19:25.564 [2024-07-22 16:57:26.996094] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=8 00:19:37.766 [2024-07-22 16:57:39.044562] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:19:37.766 [2024-07-22 16:57:39.066531] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:19:37.766 [2024-07-22 16:57:39.085654] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:19:37.766 [2024-07-22 16:57:39.087516] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:19:37.766 [2024-07-22 16:57:39.107695] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:19:37.766 [2024-07-22 16:57:39.128677] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:19:37.766 [2024-07-22 16:57:39.149249] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:19:37.766 [2024-07-22 16:57:39.189619] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:19:37.766 [2024-07-22 16:57:39.193408] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=64 00:19:37.766 [2024-07-22 16:57:39.212865] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1107296256) error ExpCmdSN=66 00:19:37.767 [2024-07-22 16:57:39.234661] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:19:37.767 [2024-07-22 16:57:39.253850] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=67 00:19:37.767 Skipping tc_ffp_15_2. It is known to fail. 00:19:37.767 Skipping tc_ffp_29_2. It is known to fail. 00:19:37.767 Skipping tc_ffp_29_3. It is known to fail. 00:19:37.767 Skipping tc_ffp_29_4. It is known to fail. 00:19:37.767 Skipping tc_err_1_1. It is known to fail. 00:19:37.767 Skipping tc_err_1_2. It is known to fail. 
00:19:37.767 Skipping tc_err_2_8. It is known to fail. 00:19:37.767 Skipping tc_err_3_1. It is known to fail. 00:19:37.767 Skipping tc_err_3_2. It is known to fail. 00:19:37.767 Skipping tc_err_3_3. It is known to fail. 00:19:37.767 Skipping tc_err_3_4. It is known to fail. 00:19:37.767 Skipping tc_err_5_1. It is known to fail. 00:19:37.767 Skipping tc_login_3_1. It is known to fail. 00:19:37.767 Skipping tc_login_11_2. It is known to fail. 00:19:37.767 Skipping tc_login_11_4. It is known to fail. 00:19:37.767 Skipping tc_login_2_2. It is known to fail. 00:19:37.767 Skipping tc_login_29_1. It is known to fail. 00:19:37.767 16:57:39 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@62 -- # failed=0 00:19:37.767 16:57:39 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:19:37.767 16:57:39 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@67 -- # iscsicleanup 00:19:37.767 Cleaning up iSCSI connection 00:19:37.767 16:57:39 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:19:37.767 16:57:39 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:19:37.767 iscsiadm: No matching sessions found 00:19:37.767 16:57:39 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@981 -- # true 00:19:37.767 16:57:39 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:19:37.767 iscsiadm: No records found 00:19:37.767 16:57:39 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@982 -- # true 00:19:37.767 16:57:39 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@983 -- # rm -rf 00:19:37.767 16:57:39 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@68 -- # killprocess 64460 00:19:37.767 16:57:39 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@948 -- # '[' -z 64460 ']' 00:19:37.767 16:57:39 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@952 -- # kill -0 64460 00:19:37.767 16:57:39 
iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@953 -- # uname 00:19:37.767 16:57:39 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:37.767 16:57:39 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64460 00:19:37.767 16:57:39 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:37.767 16:57:39 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:37.767 killing process with pid 64460 00:19:37.767 16:57:39 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64460' 00:19:37.767 16:57:39 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@967 -- # kill 64460 00:19:37.767 16:57:39 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@972 -- # wait 64460 00:19:41.062 16:57:41 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@69 -- # delete_tmp_conf_files 00:19:41.062 16:57:41 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@12 -- # rm -f /usr/local/etc/its.conf 00:19:41.062 16:57:41 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@70 -- # iscsitestfini 00:19:41.062 16:57:41 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:19:41.062 16:57:41 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@71 -- # exit 0 00:19:41.062 00:19:41.062 real 0m28.823s 00:19:41.062 user 0m44.770s 00:19:41.062 sys 0m2.778s 00:19:41.062 16:57:41 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:41.062 16:57:41 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:19:41.062 ************************************ 00:19:41.062 END TEST iscsi_tgt_calsoft 00:19:41.062 ************************************ 00:19:41.062 16:57:42 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:19:41.062 16:57:42 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@31 -- # run_test iscsi_tgt_filesystem 
/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem/filesystem.sh 00:19:41.062 16:57:42 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:41.062 16:57:42 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.062 16:57:42 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:19:41.062 ************************************ 00:19:41.062 START TEST iscsi_tgt_filesystem 00:19:41.062 ************************************ 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem/filesystem.sh 00:19:41.062 * Looking for test storage... 00:19:41.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/setup/common.sh 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@34 -- # set -e 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=y 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:19:41.062 16:57:42 
iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:19:41.062 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:19:41.063 16:57:42 
iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 
00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- 
common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:19:41.063 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:19:41.063 #define SPDK_CONFIG_H 00:19:41.063 #define SPDK_CONFIG_APPS 1 00:19:41.063 #define SPDK_CONFIG_ARCH native 00:19:41.063 #define SPDK_CONFIG_ASAN 1 00:19:41.063 #undef SPDK_CONFIG_AVAHI 00:19:41.063 #undef SPDK_CONFIG_CET 00:19:41.063 #define SPDK_CONFIG_COVERAGE 1 00:19:41.063 #define SPDK_CONFIG_CROSS_PREFIX 00:19:41.063 #undef SPDK_CONFIG_CRYPTO 00:19:41.063 #undef SPDK_CONFIG_CRYPTO_MLX5 00:19:41.063 #undef SPDK_CONFIG_CUSTOMOCF 00:19:41.063 #undef SPDK_CONFIG_DAOS 00:19:41.063 #define SPDK_CONFIG_DAOS_DIR 00:19:41.063 #define SPDK_CONFIG_DEBUG 1 00:19:41.063 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:19:41.063 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:19:41.063 #define SPDK_CONFIG_DPDK_INC_DIR 00:19:41.063 #define SPDK_CONFIG_DPDK_LIB_DIR 00:19:41.063 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:19:41.063 #undef SPDK_CONFIG_DPDK_UADK 00:19:41.063 #define 
SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:19:41.063 #define SPDK_CONFIG_EXAMPLES 1 00:19:41.063 #undef SPDK_CONFIG_FC 00:19:41.063 #define SPDK_CONFIG_FC_PATH 00:19:41.063 #define SPDK_CONFIG_FIO_PLUGIN 1 00:19:41.063 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:19:41.063 #undef SPDK_CONFIG_FUSE 00:19:41.063 #undef SPDK_CONFIG_FUZZER 00:19:41.063 #define SPDK_CONFIG_FUZZER_LIB 00:19:41.063 #undef SPDK_CONFIG_GOLANG 00:19:41.063 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:19:41.063 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:19:41.063 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:19:41.063 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:19:41.063 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:19:41.063 #undef SPDK_CONFIG_HAVE_LIBBSD 00:19:41.063 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:19:41.063 #define SPDK_CONFIG_IDXD 1 00:19:41.063 #define SPDK_CONFIG_IDXD_KERNEL 1 00:19:41.063 #undef SPDK_CONFIG_IPSEC_MB 00:19:41.063 #define SPDK_CONFIG_IPSEC_MB_DIR 00:19:41.063 #define SPDK_CONFIG_ISAL 1 00:19:41.063 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:19:41.063 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:19:41.063 #define SPDK_CONFIG_LIBDIR 00:19:41.063 #undef SPDK_CONFIG_LTO 00:19:41.063 #define SPDK_CONFIG_MAX_LCORES 128 00:19:41.063 #define SPDK_CONFIG_NVME_CUSE 1 00:19:41.063 #undef SPDK_CONFIG_OCF 00:19:41.063 #define SPDK_CONFIG_OCF_PATH 00:19:41.063 #define SPDK_CONFIG_OPENSSL_PATH 00:19:41.063 #undef SPDK_CONFIG_PGO_CAPTURE 00:19:41.063 #define SPDK_CONFIG_PGO_DIR 00:19:41.063 #undef SPDK_CONFIG_PGO_USE 00:19:41.063 #define SPDK_CONFIG_PREFIX /usr/local 00:19:41.063 #undef SPDK_CONFIG_RAID5F 00:19:41.063 #define SPDK_CONFIG_RBD 1 00:19:41.063 #define SPDK_CONFIG_RDMA 1 00:19:41.063 #define SPDK_CONFIG_RDMA_PROV verbs 00:19:41.063 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:19:41.063 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:19:41.063 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:19:41.063 #define SPDK_CONFIG_SHARED 1 00:19:41.064 #undef SPDK_CONFIG_SMA 00:19:41.064 
#define SPDK_CONFIG_TESTS 1 00:19:41.064 #undef SPDK_CONFIG_TSAN 00:19:41.064 #define SPDK_CONFIG_UBLK 1 00:19:41.064 #define SPDK_CONFIG_UBSAN 1 00:19:41.064 #undef SPDK_CONFIG_UNIT_TESTS 00:19:41.064 #undef SPDK_CONFIG_URING 00:19:41.064 #define SPDK_CONFIG_URING_PATH 00:19:41.064 #undef SPDK_CONFIG_URING_ZNS 00:19:41.064 #undef SPDK_CONFIG_USDT 00:19:41.064 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:19:41.064 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:19:41.064 #undef SPDK_CONFIG_VFIO_USER 00:19:41.064 #define SPDK_CONFIG_VFIO_USER_DIR 00:19:41.064 #define SPDK_CONFIG_VHOST 1 00:19:41.064 #define SPDK_CONFIG_VIRTIO 1 00:19:41.064 #undef SPDK_CONFIG_VTUNE 00:19:41.064 #define SPDK_CONFIG_VTUNE_DIR 00:19:41.064 #define SPDK_CONFIG_WERROR 1 00:19:41.064 #define SPDK_CONFIG_WPDK_DIR 00:19:41.064 #undef SPDK_CONFIG_XNVME 00:19:41.064 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@5 -- # export PATH 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@68 -- # uname -s 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@68 -- # PM_OS=Linux 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 
00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@76 -- # SUDO[0]= 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@58 -- # : 1 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@62 -- # : 0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@64 -- # : 0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@66 -- # : 1 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@68 -- # : 0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@70 -- # : 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@72 -- # : 0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@74 -- # : 0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@76 -- # : 1 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@78 -- # : 1 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@80 -- # : 0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@82 -- # : 0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@84 -- # : 0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@86 -- # : 0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@88 -- # : 0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@90 -- # : 0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@92 -- # : 0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@94 -- # : 0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@96 -- # : 0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@98 -- # : 0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@100 -- # : 0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@104 -- # : 1 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@106 -- # : 0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@108 -- # : 0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@110 -- # : 0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@112 -- # : 0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@114 -- # : 0 00:19:41.064 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:19:41.064 16:57:42 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@116 -- # : 0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@118 -- # : 0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@120 -- # : 1 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@122 -- # : 1 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@124 -- # : 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@126 -- # : 0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@128 -- # : 0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@130 -- # : 0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@132 -- # : 0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@134 -- # : 0 00:19:41.065 16:57:42 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@136 -- # : 0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@138 -- # : 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@140 -- # : true 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@142 -- # : 0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@144 -- # : 0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@146 -- # : 0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@148 -- # : 0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@150 -- # : 0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@152 -- # : 0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 
00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@154 -- # : 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@156 -- # : 0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@158 -- # : 0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@160 -- # : 0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@162 -- # : 0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@164 -- # : 0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@167 -- # : 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@169 -- # : 0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@171 -- # : 0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@175 -- # 
export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@193 -- # export 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@200 -- # cat 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem 
-- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:19:41.065 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:19:41.066 16:57:42 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@318 -- # [[ -z 65202 ]] 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@318 -- # kill -0 65202 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@330 -- # local 
requested_size=2147483648 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.VW5xDv 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem /tmp/spdk.VW5xDv/tests/filesystem /tmp/spdk.VW5xDv 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@327 -- # df -T 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6263181312 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2496167936 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10989568 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem 
-- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13788938240 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5239898112 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13788938240 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5239898112 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:19:41.066 16:57:42 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267752448 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267895808 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=143360 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 
00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt/output 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=93485088768 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6217691136 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:19:41.066 * Looking for test storage... 
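The "Looking for test storage..." step above is `set_test_storage` walking `storage_candidates` and parsing `df -T` into the `mounts`/`fss`/`sizes`/`avails`/`uses` associative arrays, then picking the first directory whose filesystem has enough free space. A simplified stand-alone sketch of that selection logic (the function name `pick_test_storage` is illustrative, not the real helper, which also special-cases tmpfs/ramfs and can grow the mount):

```shell
#!/usr/bin/env bash
# Sketch of the storage-selection loop traced above: echo the first
# candidate directory whose filesystem reports at least $requested_size
# bytes available, per `df -P` (POSIX output, no line wrapping).
pick_test_storage() {
    local requested_size=$1; shift
    local dir avail
    for dir in "$@"; do
        [[ -d $dir ]] || continue
        # df -P columns: Filesystem 1024-blocks Used Available Capacity Mounted-on
        avail=$(df -P "$dir" | awk 'NR==2 {print $4 * 1024}')
        if (( avail >= requested_size )); then
            echo "$dir"
            return 0
        fi
    done
    return 1
}
```

In the trace, the request is 2 GiB plus a 64 MiB margin (`requested_size=2214592512`), and the `/home` btrfs mount with `target_space=13788938240` bytes free satisfies it, so the test storage lands under `/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem`.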
00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@374 -- # target_space=13788938240 00:19:41.066 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:19:41.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:19:41.067 16:57:42 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@389 -- # return 0 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1687 -- # true 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@27 -- # exec 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@29 -- # exec 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@18 -- # set -x 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@25 -- # 
INITIATOR_TAG=2 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@11 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@5 -- # export PATH 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@13 -- # iscsitestinit 00:19:41.067 16:57:42 
iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@29 -- # timing_enter start_iscsi_tgt 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@32 -- # pid=65239 00:19:41.067 Process pid: 65239 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@33 -- # echo 'Process pid: 65239' 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@35 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@37 -- # waitforlisten 65239 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@31 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@829 -- # '[' -z 65239 ']' 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:41.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
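The `waitforlisten 65239` call above polls until the `iscsi_tgt` process is up and its RPC socket at `/var/tmp/spdk.sock` accepts connections (with `max_retries=100`). A minimal stand-alone sketch of that style of poll loop, assuming only bash builtins (the real helper additionally verifies the pid is still alive and probes the socket with `rpc.py`):

```shell
# Sketch of a waitforlisten-style loop: wait until a UNIX domain socket
# appears at the given path, retrying with a short sleep, or give up.
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    local i
    for (( i = 0; i < retries; i++ )); do
        # -S is true only if the path exists and is a socket
        if [[ -S $sock ]]; then
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

Because the target is started with `--wait-for-rpc`, the socket appearing only means the RPC server is ready; the framework itself is initialized afterwards by the `iscsi_set_options` and `framework_start_init` RPCs seen in the following entries.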
00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.067 16:57:42 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:19:41.067 [2024-07-22 16:57:42.404233] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:41.067 [2024-07-22 16:57:42.404490] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65239 ] 00:19:41.067 [2024-07-22 16:57:42.578092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:41.326 [2024-07-22 16:57:42.873195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.326 [2024-07-22 16:57:42.873352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.326 [2024-07-22 16:57:42.873488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.326 [2024-07-22 16:57:42.873505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:41.891 16:57:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.891 16:57:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@862 -- # return 0 00:19:41.891 16:57:43 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@38 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:19:41.891 16:57:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.891 16:57:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:19:41.891 16:57:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.891 16:57:43 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@39 -- # rpc_cmd framework_start_init 00:19:41.891 16:57:43 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.891 16:57:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.826 iscsi_tgt is listening. Running tests... 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@40 -- # echo 'iscsi_tgt is listening. Running tests...' 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@42 -- # timing_exit start_iscsi_tgt 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@44 -- # get_first_nvme_bdf 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1524 -- # bdfs=() 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1524 -- # local bdfs 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1513 -- # bdfs=() 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1513 -- # local bdfs 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@44 -- # bdf=0000:00:10.0 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@45 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@46 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@47 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:00:10.0 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:19:42.826 Nvme0n1 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@49 -- # rpc_cmd bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@49 -- # ls_guid=e6867b27-c161-4233-b7ee-609b56d1b67e 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@50 -- # get_lvs_free_mb e6867b27-c161-4233-b7ee-609b56d1b67e 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1364 -- # local lvs_uuid=e6867b27-c161-4233-b7ee-609b56d1b67e 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1365 -- # local lvs_info 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1366 -- # local fc 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1367 -- # local cs 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_lvol_get_lvstores 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.826 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:19:42.826 { 00:19:42.826 "uuid": "e6867b27-c161-4233-b7ee-609b56d1b67e", 00:19:42.826 "name": "lvs_0", 00:19:42.826 "base_bdev": "Nvme0n1", 00:19:42.827 "total_data_clusters": 1278, 00:19:42.827 "free_clusters": 1278, 00:19:42.827 "block_size": 4096, 00:19:42.827 "cluster_size": 4194304 00:19:42.827 } 00:19:42.827 ]' 00:19:42.827 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="e6867b27-c161-4233-b7ee-609b56d1b67e") 
.free_clusters' 00:19:42.827 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1369 -- # fc=1278 00:19:42.827 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="e6867b27-c161-4233-b7ee-609b56d1b67e") .cluster_size' 00:19:43.085 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1370 -- # cs=4194304 00:19:43.085 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1373 -- # free_mb=5112 00:19:43.085 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1374 -- # echo 5112 00:19:43.085 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@50 -- # free_mb=5112 00:19:43.085 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@52 -- # '[' 5112 -gt 2048 ']' 00:19:43.085 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@53 -- # rpc_cmd bdev_lvol_create -u e6867b27-c161-4233-b7ee-609b56d1b67e lbd_0 2048 00:19:43.085 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.085 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:19:43.085 7975a657-51aa-4a3e-82b5-4cd00aa60234 00:19:43.085 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.085 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@61 -- # lvol_name=lvs_0/lbd_0 00:19:43.085 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@62 -- # rpc_cmd iscsi_create_target_node Target1 Target1_alias lvs_0/lbd_0:0 1:2 256 -d 00:19:43.085 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.085 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:19:43.085 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.085 16:57:44 iscsi_tgt.iscsi_tgt_filesystem -- 
filesystem/filesystem.sh@63 -- # sleep 1 00:19:44.020 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@65 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:19:44.020 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:19:44.020 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@66 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:19:44.020 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:19:44.020 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:19:44.020 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@67 -- # waitforiscsidevices 1 00:19:44.020 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@116 -- # local num=1 00:19:44.020 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:19:44.020 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:19:44.020 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:19:44.020 [2024-07-22 16:57:45.594926] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:44.020 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:19:44.020 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # n=1 00:19:44.020 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:19:44.020 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@123 -- # return 0 00:19:44.020 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@69 -- # get_bdev_size lvs_0/lbd_0 00:19:44.020 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1378 -- # local bdev_name=lvs_0/lbd_0 00:19:44.020 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:44.020 16:57:45 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1380 -- # local bs 00:19:44.020 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1381 -- # local nb 00:19:44.020 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b lvs_0/lbd_0 00:19:44.020 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.020 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:19:44.020 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.020 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:44.020 { 00:19:44.020 "name": "7975a657-51aa-4a3e-82b5-4cd00aa60234", 00:19:44.020 "aliases": [ 00:19:44.020 "lvs_0/lbd_0" 00:19:44.020 ], 00:19:44.020 "product_name": "Logical Volume", 00:19:44.020 "block_size": 4096, 00:19:44.020 "num_blocks": 524288, 00:19:44.020 "uuid": "7975a657-51aa-4a3e-82b5-4cd00aa60234", 00:19:44.020 "assigned_rate_limits": { 00:19:44.020 "rw_ios_per_sec": 0, 00:19:44.020 "rw_mbytes_per_sec": 0, 00:19:44.020 "r_mbytes_per_sec": 0, 00:19:44.020 "w_mbytes_per_sec": 0 00:19:44.020 }, 00:19:44.020 "claimed": false, 00:19:44.020 "zoned": false, 00:19:44.020 "supported_io_types": { 00:19:44.020 "read": true, 00:19:44.020 "write": true, 00:19:44.020 "unmap": true, 00:19:44.020 "flush": false, 00:19:44.020 "reset": true, 00:19:44.020 "nvme_admin": false, 00:19:44.020 "nvme_io": false, 00:19:44.020 "nvme_io_md": false, 00:19:44.020 "write_zeroes": true, 00:19:44.020 "zcopy": false, 00:19:44.020 "get_zone_info": false, 00:19:44.020 "zone_management": false, 00:19:44.020 "zone_append": false, 00:19:44.020 "compare": false, 00:19:44.020 "compare_and_write": false, 00:19:44.020 "abort": false, 00:19:44.020 "seek_hole": true, 00:19:44.020 "seek_data": true, 00:19:44.020 "copy": false, 00:19:44.020 "nvme_iov_md": false 00:19:44.020 }, 
00:19:44.020 "driver_specific": { 00:19:44.020 "lvol": { 00:19:44.020 "lvol_store_uuid": "e6867b27-c161-4233-b7ee-609b56d1b67e", 00:19:44.020 "base_bdev": "Nvme0n1", 00:19:44.020 "thin_provision": false, 00:19:44.020 "num_allocated_clusters": 512, 00:19:44.020 "snapshot": false, 00:19:44.020 "clone": false, 00:19:44.020 "esnap_clone": false 00:19:44.020 } 00:19:44.020 } 00:19:44.020 } 00:19:44.020 ]' 00:19:44.020 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1383 -- # bs=4096 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1384 -- # nb=524288 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1387 -- # bdev_size=2048 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1388 -- # echo 2048 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@69 -- # lvol_size=2147483648 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@70 -- # trap 'iscsicleanup; remove_backends; umount /mnt/device; rm -rf /mnt/device; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@72 -- # mkdir -p /mnt/device 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # iscsiadm -m session -P 3 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # grep 'Attached scsi disk' 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # awk '{print $4}' 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # dev=sda 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- 
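`get_bdev_size` above multiplies the `.block_size` and `.num_blocks` that jq extracted from `bdev_get_bdevs`, yielding 2048 MiB, which the caller then converts back to bytes as `lvol_size`. A sketch of that arithmetic with the values from this log (a reconstruction of the math, not the helper itself):

```shell
#!/usr/bin/env sh
# Reproduce get_bdev_size: MiB = block_size * num_blocks / 1024 / 1024,
# using the values jq pulled from bdev_get_bdevs in the trace.
bs=4096        # .block_size
nb=524288      # .num_blocks
bdev_size=$((bs * nb / 1024 / 1024))
lvol_size=$((bdev_size * 1024 * 1024))
printf '%s MiB = %s bytes\n' "$bdev_size" "$lvol_size"   # 2048 MiB = 2147483648 bytes
```

The later `(( lvol_size == dev_size ))` check compares this 2147483648 against the byte size reported for `/dev/sda`, confirming the initiator sees the full lvol.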
filesystem/filesystem.sh@76 -- # waitforfile /dev/sda 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1265 -- # local i=0 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda ']' 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda ']' 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1276 -- # return 0 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@78 -- # sec_size_to_bytes sda 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@76 -- # local dev=sda 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@78 -- # [[ -e /sys/block/sda ]] 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@80 -- # echo 2147483648 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@78 -- # dev_size=2147483648 00:19:44.278 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@80 -- # (( lvol_size == dev_size )) 00:19:44.279 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@81 -- # parted -s /dev/sda mklabel gpt mkpart SPDK_TEST 0% 100% 00:19:44.279 16:57:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@82 -- # sleep 1 00:19:44.279 [2024-07-22 16:57:45.770827] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:45.211 16:57:46 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@144 -- # run_test iscsi_tgt_filesystem_ext4 filesystem_test ext4 00:19:45.211 16:57:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:45.211 16:57:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:45.211 16:57:46 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:19:45.211 ************************************ 
00:19:45.211 START TEST iscsi_tgt_filesystem_ext4 00:19:45.211 ************************************ 00:19:45.211 16:57:46 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1123 -- # filesystem_test ext4 00:19:45.211 16:57:46 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@89 -- # fstype=ext4 00:19:45.211 16:57:46 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@91 -- # make_filesystem ext4 /dev/sda1 00:19:45.211 16:57:46 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:19:45.211 16:57:46 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda1 00:19:45.211 16:57:46 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:19:45.211 16:57:46 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:19:45.211 16:57:46 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:19:45.211 16:57:46 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:19:45.211 16:57:46 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda1 00:19:45.211 mke2fs 1.46.5 (30-Dec-2021) 00:19:45.470 Discarding device blocks: 0/522240 done 00:19:45.470 Creating filesystem with 522240 4k blocks and 130560 inodes 00:19:45.470 Filesystem UUID: 63f3f7ed-417d-482d-a3c5-03d65c336a2a 00:19:45.470 Superblock backups stored on blocks: 00:19:45.470 32768, 98304, 163840, 229376, 294912 00:19:45.470 00:19:45.470 Allocating group tables: 0/16 done 00:19:45.470 Writing inode tables: 0/16 done 00:19:45.470 Creating journal (8192 blocks): done 00:19:45.728 Writing superblocks and filesystem accounting 
information: 0/16 done 00:19:45.728 00:19:45.728 16:57:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:19:45.728 16:57:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:19:45.728 16:57:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@93 -- # '[' 1 -eq 1 ']' 00:19:45.728 16:57:47 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@94 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1024M -name=job0 00:19:45.728 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:19:45.728 fio-3.35 00:19:45.728 Starting 1 thread 00:19:45.728 job0: Laying out IO file (1 file / 1024MiB) 00:20:07.653 00:20:07.653 job0: (groupid=0, jobs=1): err= 0: pid=65405: Mon Jul 22 16:58:05 2024 00:20:07.653 write: IOPS=14.5k, BW=56.6MiB/s (59.4MB/s)(1024MiB/18087msec); 0 zone resets 00:20:07.653 slat (usec): min=5, max=41355, avg=22.67, stdev=197.10 00:20:07.653 clat (usec): min=369, max=62230, avg=4390.90, stdev=2439.72 00:20:07.653 lat (usec): min=428, max=62250, avg=4413.57, stdev=2451.52 00:20:07.653 clat percentiles (usec): 00:20:07.653 | 1.00th=[ 2180], 5.00th=[ 2442], 10.00th=[ 2769], 20.00th=[ 3097], 00:20:07.653 | 30.00th=[ 3654], 40.00th=[ 4015], 50.00th=[ 4293], 60.00th=[ 4555], 00:20:07.653 | 70.00th=[ 4817], 80.00th=[ 5145], 90.00th=[ 5735], 95.00th=[ 6259], 00:20:07.653 | 99.00th=[ 7504], 99.50th=[15139], 99.90th=[43779], 99.95th=[47973], 00:20:07.653 | 99.99th=[60031] 00:20:07.653 bw ( KiB/s): min=41904, max=63248, per=100.00%, avg=57977.33, stdev=5046.65, samples=36 00:20:07.653 iops : min=10476, max=15812, avg=14494.28, stdev=1261.66, samples=36 00:20:07.653 lat (usec) : 500=0.01% 00:20:07.653 lat (msec) : 2=0.16%, 4=38.97%, 10=60.25%, 
20=0.14%, 50=0.43% 00:20:07.653 lat (msec) : 100=0.05% 00:20:07.653 cpu : usr=5.41%, sys=20.29%, ctx=23383, majf=0, minf=1 00:20:07.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:20:07.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:07.653 issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:07.653 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:07.653 00:20:07.653 Run status group 0 (all jobs): 00:20:07.653 WRITE: bw=56.6MiB/s (59.4MB/s), 56.6MiB/s-56.6MiB/s (59.4MB/s-59.4MB/s), io=1024MiB (1074MB), run=18087-18087msec 00:20:07.653 00:20:07.653 Disk stats (read/write): 00:20:07.653 sda: ios=0/258021, merge=0/2499, ticks=0/1026045, in_queue=1026045, util=99.47% 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@96 -- # umount /mnt/device 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@98 -- # iscsiadm -m node --logout 00:20:07.653 Logging out of session [sid: 1, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:20:07.653 Logout of [sid: 1, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@99 -- # waitforiscsidevices 0 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@116 -- # local num=0 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:20:07.653 iscsiadm: No active sessions. 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # true 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # n=0 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@123 -- # return 0 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@100 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:20:07.653 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:20:07.653 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@101 -- # waitforiscsidevices 1 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@116 -- # local num=1 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:20:07.653 [2024-07-22 16:58:05.593480] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # n=1 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@123 -- # return 0 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # iscsiadm -m session -P 3 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # grep 'Attached scsi disk' 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # awk '{print $4}' 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # dev=sda 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@105 -- # waitforfile /dev/sda1 00:20:07.653 16:58:05 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1265 -- # local i=0 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1276 -- # return 0 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@106 -- # mount -o rw /dev/sda1 /mnt/device 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@107 -- # '[' -f /mnt/device/test ']' 00:20:07.653 File existed. 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@108 -- # echo 'File existed.' 00:20:07.653 16:58:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@109 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randread -ioengine=libaio -bs=4k -runtime=20 -time_based=1 -name=job0 00:20:07.653 job0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:20:07.653 fio-3.35 00:20:07.653 Starting 1 thread 00:20:25.808 00:20:25.808 job0: (groupid=0, jobs=1): err= 0: pid=65728: Mon Jul 22 16:58:25 2024 00:20:25.808 read: IOPS=15.3k, BW=59.7MiB/s (62.6MB/s)(1194MiB/20003msec) 00:20:25.808 slat (usec): min=2, max=4927, avg=10.33, stdev=57.65 00:20:25.808 clat (usec): min=624, max=34594, avg=4172.89, stdev=1368.91 00:20:25.808 lat (usec): min=680, max=36323, avg=4183.22, stdev=1378.99 00:20:25.808 clat percentiles (usec): 00:20:25.808 | 1.00th=[ 2245], 5.00th=[ 2606], 10.00th=[ 2704], 20.00th=[ 3032], 00:20:25.808 | 30.00th=[ 3425], 40.00th=[ 3818], 50.00th=[ 4080], 60.00th=[ 
4359], 00:20:25.808 | 70.00th=[ 4752], 80.00th=[ 5080], 90.00th=[ 5604], 95.00th=[ 5932], 00:20:25.808 | 99.00th=[ 6980], 99.50th=[ 8455], 99.90th=[17957], 99.95th=[26608], 00:20:25.808 | 99.99th=[31589] 00:20:25.808 bw ( KiB/s): min=26872, max=66752, per=100.00%, avg=61198.15, stdev=5896.38, samples=39 00:20:25.808 iops : min= 6718, max=16688, avg=15299.54, stdev=1474.09, samples=39 00:20:25.808 lat (usec) : 750=0.01%, 1000=0.01% 00:20:25.808 lat (msec) : 2=0.26%, 4=44.43%, 10=54.97%, 20=0.25%, 50=0.07% 00:20:25.808 cpu : usr=5.49%, sys=12.80%, ctx=27633, majf=0, minf=65 00:20:25.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:20:25.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:25.809 issued rwts: total=305633,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.809 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:25.809 00:20:25.809 Run status group 0 (all jobs): 00:20:25.809 READ: bw=59.7MiB/s (62.6MB/s), 59.7MiB/s-59.7MiB/s (62.6MB/s-62.6MB/s), io=1194MiB (1252MB), run=20003-20003msec 00:20:25.809 00:20:25.809 Disk stats (read/write): 00:20:25.809 sda: ios=303227/5, merge=1380/2, ticks=1195840/7, in_queue=1195847, util=99.60% 00:20:25.809 16:58:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@116 -- # rm -rf /mnt/device/test 00:20:25.809 16:58:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@117 -- # umount /mnt/device 00:20:25.809 ************************************ 00:20:25.809 END TEST iscsi_tgt_filesystem_ext4 00:20:25.809 ************************************ 00:20:25.809 00:20:25.809 real 0m39.157s 00:20:25.809 user 0m2.330s 00:20:25.809 sys 0m6.477s 00:20:25.809 16:58:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:25.809 16:58:25 
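The ext4 pass above ran `mkfs.ext4 -F`, while the btrfs pass below runs `mkfs.btrfs -f`: `make_filesystem` picks the "force" spelling per filesystem, since ext4's mkfs uses `-F` where btrfs and xfs use `-f`. A sketch of that flag selection (the `force_flag` function name is ours; the real helper goes on to invoke `mkfs.$fstype $force $dev_name`, which is omitted here):

```shell
#!/usr/bin/env sh
# Sketch of the force-flag selection visible in the make_filesystem trace:
# ext4's mkfs spells "force" as -F; btrfs and xfs spell it -f.
force_flag() {
    fstype=$1
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi
    printf '%s\n' "$force"
    # real helper then runs: mkfs.$fstype $force $dev_name
}

force_flag ext4    # prints -F
force_flag btrfs   # prints -f
```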
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:20:25.809 16:58:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:20:25.809 16:58:25 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@145 -- # run_test iscsi_tgt_filesystem_btrfs filesystem_test btrfs 00:20:25.809 16:58:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:25.809 16:58:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:25.809 16:58:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:20:25.809 ************************************ 00:20:25.809 START TEST iscsi_tgt_filesystem_btrfs 00:20:25.809 ************************************ 00:20:25.809 16:58:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1123 -- # filesystem_test btrfs 00:20:25.809 16:58:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@89 -- # fstype=btrfs 00:20:25.809 16:58:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@91 -- # make_filesystem btrfs /dev/sda1 00:20:25.809 16:58:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:20:25.809 16:58:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda1 00:20:25.809 16:58:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:20:25.809 16:58:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:20:25.809 16:58:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:20:25.809 16:58:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- 
common/autotest_common.sh@932 -- # force=-f 00:20:25.809 16:58:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/sda1 00:20:25.809 btrfs-progs v6.6.2 00:20:25.809 See https://btrfs.readthedocs.io for more information. 00:20:25.809 00:20:25.809 Performing full device TRIM /dev/sda1 (1.99GiB) ... 00:20:25.809 NOTE: several default settings have changed in version 5.15, please make sure 00:20:25.809 this does not affect your deployments: 00:20:25.809 - DUP for metadata (-m dup) 00:20:25.809 - enabled no-holes (-O no-holes) 00:20:25.809 - enabled free-space-tree (-R free-space-tree) 00:20:25.809 00:20:25.809 Label: (null) 00:20:25.809 UUID: 7c176a6f-2819-43d1-86cd-ff28bc140710 00:20:25.809 Node size: 16384 00:20:25.809 Sector size: 4096 00:20:25.809 Filesystem size: 1.99GiB 00:20:25.809 Block group profiles: 00:20:25.809 Data: single 8.00MiB 00:20:25.809 Metadata: DUP 102.00MiB 00:20:25.809 System: DUP 8.00MiB 00:20:25.809 SSD detected: yes 00:20:25.809 Zoned device: no 00:20:25.809 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:20:25.809 Runtime features: free-space-tree 00:20:25.809 Checksum: crc32c 00:20:25.809 Number of devices: 1 00:20:25.809 Devices: 00:20:25.809 ID SIZE PATH 00:20:25.809 1 1.99GiB /dev/sda1 00:20:25.809 00:20:25.809 16:58:26 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:20:25.809 16:58:26 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:20:25.809 16:58:26 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@93 -- # '[' 1 -eq 1 ']' 00:20:25.809 16:58:26 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@94 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1024M -name=job0 
00:20:25.809 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:20:25.809 fio-3.35 00:20:25.809 Starting 1 thread 00:20:25.809 job0: Laying out IO file (1 file / 1024MiB) 00:20:43.981 00:20:43.981 job0: (groupid=0, jobs=1): err= 0: pid=65984: Mon Jul 22 16:58:44 2024 00:20:43.982 write: IOPS=14.2k, BW=55.6MiB/s (58.3MB/s)(1024MiB/18428msec); 0 zone resets 00:20:43.982 slat (usec): min=7, max=4436, avg=42.08, stdev=81.40 00:20:43.982 clat (usec): min=611, max=14343, avg=4454.77, stdev=1339.54 00:20:43.982 lat (usec): min=666, max=14370, avg=4496.85, stdev=1347.66 00:20:43.982 clat percentiles (usec): 00:20:43.982 | 1.00th=[ 2057], 5.00th=[ 2442], 10.00th=[ 2802], 20.00th=[ 3261], 00:20:43.982 | 30.00th=[ 3720], 40.00th=[ 4113], 50.00th=[ 4424], 60.00th=[ 4752], 00:20:43.982 | 70.00th=[ 5014], 80.00th=[ 5342], 90.00th=[ 6063], 95.00th=[ 6783], 00:20:43.982 | 99.00th=[ 8586], 99.50th=[ 9241], 99.90th=[10683], 99.95th=[11207], 00:20:43.982 | 99.99th=[12518] 00:20:43.982 bw ( KiB/s): min=51072, max=60760, per=99.74%, avg=56751.06, stdev=2763.97, samples=36 00:20:43.982 iops : min=12768, max=15190, avg=14187.75, stdev=690.98, samples=36 00:20:43.982 lat (usec) : 750=0.01%, 1000=0.01% 00:20:43.982 lat (msec) : 2=0.74%, 4=36.23%, 10=62.77%, 20=0.26% 00:20:43.982 cpu : usr=5.33%, sys=32.69%, ctx=49864, majf=0, minf=1 00:20:43.982 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:20:43.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.982 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:43.982 issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.982 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:43.982 00:20:43.982 Run status group 0 (all jobs): 00:20:43.982 WRITE: bw=55.6MiB/s (58.3MB/s), 55.6MiB/s-55.6MiB/s (58.3MB/s-58.3MB/s), io=1024MiB (1074MB), run=18428-18428msec 00:20:43.982 
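fio's summary line above reports 55.6 MiB/s for the 1024 MiB btrfs write pass over 18428 ms; the bandwidth is just io_size divided by runtime. A sketch cross-checking that figure in integer shell arithmetic (scaled by 10 and rounded to get one decimal place, since POSIX `$(( ))` has no floats):

```shell
#!/usr/bin/env sh
# Cross-check fio's reported bandwidth for the btrfs write pass:
# BW = io_size / runtime; 1024 MiB over 18428 ms ~= 55.6 MiB/s.
io_mib=1024
runtime_ms=18428
# scale by 10 for one decimal digit, add runtime_ms/2 to round, not truncate
bw_x10=$(( (io_mib * 1000 * 10 + runtime_ms / 2) / runtime_ms ))
printf '%d.%d MiB/s\n' $((bw_x10 / 10)) $((bw_x10 % 10))   # 55.6 MiB/s
```

The same arithmetic against the ext4 pass (1024 MiB over 18087 ms) lands on the 56.6 MiB/s fio reported there.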
16:58:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@96 -- # umount /mnt/device 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@98 -- # iscsiadm -m node --logout 00:20:43.982 Logging out of session [sid: 2, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:20:43.982 Logout of [sid: 2, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@99 -- # waitforiscsidevices 0 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@116 -- # local num=0 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:20:43.982 iscsiadm: No active sessions. 
00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # true 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # n=0 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@123 -- # return 0 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@100 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:20:43.982 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:20:43.982 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@101 -- # waitforiscsidevices 1 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@116 -- # local num=1 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:20:43.982 [2024-07-22 16:58:45.155445] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # n=1 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- 
iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@123 -- # return 0 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # iscsiadm -m session -P 3 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # grep 'Attached scsi disk' 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # awk '{print $4}' 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # dev=sda 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@105 -- # waitforfile /dev/sda1 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1265 -- # local i=0 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1276 -- # return 0 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@106 -- # mount -o rw /dev/sda1 /mnt/device 00:20:43.982 File existed. 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@107 -- # '[' -f /mnt/device/test ']' 00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@108 -- # echo 'File existed.' 
00:20:43.982 16:58:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@109 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randread -ioengine=libaio -bs=4k -runtime=20 -time_based=1 -name=job0 00:20:43.982 job0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:20:43.982 fio-3.35 00:20:43.982 Starting 1 thread 00:21:05.907 00:21:05.907 job0: (groupid=0, jobs=1): err= 0: pid=66263: Mon Jul 22 16:59:05 2024 00:21:05.907 read: IOPS=14.9k, BW=58.4MiB/s (61.2MB/s)(1168MiB/20004msec) 00:21:05.907 slat (usec): min=4, max=2881, avg=11.91, stdev=24.24 00:21:05.907 clat (usec): min=1489, max=36974, avg=4264.24, stdev=1211.93 00:21:05.907 lat (usec): min=1541, max=38059, avg=4276.15, stdev=1217.97 00:21:05.907 clat percentiles (usec): 00:21:05.907 | 1.00th=[ 2311], 5.00th=[ 2671], 10.00th=[ 2868], 20.00th=[ 3163], 00:21:05.907 | 30.00th=[ 3556], 40.00th=[ 3884], 50.00th=[ 4228], 60.00th=[ 4555], 00:21:05.907 | 70.00th=[ 4883], 80.00th=[ 5276], 90.00th=[ 5735], 95.00th=[ 6063], 00:21:05.907 | 99.00th=[ 6718], 99.50th=[ 6980], 99.90th=[10945], 99.95th=[19530], 00:21:05.907 | 99.99th=[30540] 00:21:05.907 bw ( KiB/s): min=45280, max=64728, per=100.00%, avg=59880.21, stdev=3347.32, samples=39 00:21:05.907 iops : min=11320, max=16182, avg=14970.05, stdev=836.83, samples=39 00:21:05.907 lat (msec) : 2=0.05%, 4=43.15%, 10=56.67%, 20=0.08%, 50=0.05% 00:21:05.907 cpu : usr=5.32%, sys=16.54%, ctx=43555, majf=0, minf=65 00:21:05.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:21:05.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:05.907 issued rwts: total=299001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.907 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:05.907 00:21:05.907 Run status group 
0 (all jobs): 00:21:05.907 READ: bw=58.4MiB/s (61.2MB/s), 58.4MiB/s-58.4MiB/s (61.2MB/s-61.2MB/s), io=1168MiB (1225MB), run=20004-20004msec 00:21:05.907 16:59:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@116 -- # rm -rf /mnt/device/test 00:21:05.907 16:59:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@117 -- # umount /mnt/device 00:21:05.907 00:21:05.907 real 0m39.546s 00:21:05.907 user 0m2.305s 00:21:05.907 sys 0m9.769s 00:21:05.907 16:59:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:05.907 16:59:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:21:05.907 ************************************ 00:21:05.907 END TEST iscsi_tgt_filesystem_btrfs 00:21:05.907 ************************************ 00:21:05.907 16:59:05 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:21:05.907 16:59:05 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@146 -- # run_test iscsi_tgt_filesystem_xfs filesystem_test xfs 00:21:05.907 16:59:05 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:05.907 16:59:05 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:05.907 16:59:05 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:21:05.907 ************************************ 00:21:05.907 START TEST iscsi_tgt_filesystem_xfs 00:21:05.907 ************************************ 00:21:05.907 16:59:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1123 -- # filesystem_test xfs 00:21:05.907 16:59:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@89 -- # fstype=xfs 00:21:05.907 16:59:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@91 -- # 
make_filesystem xfs /dev/sda1 00:21:05.907 16:59:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:21:05.907 16:59:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda1 00:21:05.907 16:59:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:21:05.907 16:59:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:21:05.907 16:59:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:21:05.907 16:59:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:21:05.907 16:59:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/sda1 00:21:05.907 meta-data=/dev/sda1 isize=512 agcount=4, agsize=130560 blks 00:21:05.907 = sectsz=4096 attr=2, projid32bit=1 00:21:05.907 = crc=1 finobt=1, sparse=1, rmapbt=0 00:21:05.907 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:21:05.907 data = bsize=4096 blocks=522240, imaxpct=25 00:21:05.907 = sunit=0 swidth=0 blks 00:21:05.907 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:21:05.907 log =internal log bsize=4096 blocks=16384, version=2 00:21:05.907 = sectsz=4096 sunit=1 blks, lazy-count=1 00:21:05.907 realtime =none extsz=4096 blocks=0, rtextents=0 00:21:05.907 Discarding blocks...Done. 
00:21:05.907 16:59:06 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:21:05.907 16:59:06 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:21:05.907 16:59:06 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@93 -- # '[' 1 -eq 1 ']' 00:21:05.907 16:59:06 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@94 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1024M -name=job0 00:21:05.907 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:21:05.907 fio-3.35 00:21:05.907 Starting 1 thread 00:21:05.907 job0: Laying out IO file (1 file / 1024MiB) 00:21:24.027 00:21:24.027 job0: (groupid=0, jobs=1): err= 0: pid=66525: Mon Jul 22 16:59:25 2024 00:21:24.027 write: IOPS=14.4k, BW=56.1MiB/s (58.9MB/s)(1024MiB/18237msec); 0 zone resets 00:21:24.027 slat (usec): min=2, max=5029, avg=23.23, stdev=135.17 00:21:24.027 clat (usec): min=1126, max=13783, avg=4427.64, stdev=1164.31 00:21:24.027 lat (usec): min=1132, max=13811, avg=4450.87, stdev=1172.94 00:21:24.027 clat percentiles (usec): 00:21:24.027 | 1.00th=[ 2343], 5.00th=[ 2507], 10.00th=[ 2900], 20.00th=[ 3261], 00:21:24.027 | 30.00th=[ 3785], 40.00th=[ 4228], 50.00th=[ 4424], 60.00th=[ 4817], 00:21:24.027 | 70.00th=[ 5014], 80.00th=[ 5342], 90.00th=[ 5866], 95.00th=[ 6390], 00:21:24.027 | 99.00th=[ 7242], 99.50th=[ 7701], 99.90th=[ 8848], 99.95th=[ 9372], 00:21:24.027 | 99.99th=[11207] 00:21:24.027 bw ( KiB/s): min=49880, max=61432, per=100.00%, avg=57666.03, stdev=2510.46, samples=36 00:21:24.027 iops : min=12470, max=15358, avg=14416.50, stdev=627.62, samples=36 00:21:24.027 lat (msec) : 2=0.02%, 4=34.61%, 10=65.35%, 20=0.03% 00:21:24.027 cpu : usr=4.52%, sys=11.08%, ctx=22617, majf=0, minf=1 
00:21:24.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:21:24.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:24.027 issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:24.027 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:24.027 00:21:24.027 Run status group 0 (all jobs): 00:21:24.027 WRITE: bw=56.1MiB/s (58.9MB/s), 56.1MiB/s-56.1MiB/s (58.9MB/s-58.9MB/s), io=1024MiB (1074MB), run=18237-18237msec 00:21:24.027 00:21:24.027 Disk stats (read/write): 00:21:24.027 sda: ios=0/259920, merge=0/950, ticks=0/1022141, in_queue=1022141, util=99.56% 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@96 -- # umount /mnt/device 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@98 -- # iscsiadm -m node --logout 00:21:24.027 Logging out of session [sid: 3, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:21:24.027 Logout of [sid: 3, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@99 -- # waitforiscsidevices 0 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@116 -- # local num=0 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:21:24.027 iscsiadm: No active sessions. 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # true 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # n=0 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@123 -- # return 0 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@100 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:21:24.027 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:21:24.027 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@101 -- # waitforiscsidevices 1 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@116 -- # local num=1 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:21:24.027 [2024-07-22 16:59:25.307899] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # n=1 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@123 -- # return 0 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # iscsiadm -m session -P 3 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # grep 'Attached scsi disk' 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # awk '{print $4}' 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # dev=sda 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@105 -- # waitforfile /dev/sda1 00:21:24.027 16:59:25 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1265 -- # local i=0 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1276 -- # return 0 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@106 -- # mount -o rw /dev/sda1 /mnt/device 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@107 -- # '[' -f /mnt/device/test ']' 00:21:24.027 File existed. 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@108 -- # echo 'File existed.' 00:21:24.027 16:59:25 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@109 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randread -ioengine=libaio -bs=4k -runtime=20 -time_based=1 -name=job0 00:21:24.027 job0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:21:24.027 fio-3.35 00:21:24.027 Starting 1 thread 00:21:45.956 00:21:45.956 job0: (groupid=0, jobs=1): err= 0: pid=66762: Mon Jul 22 16:59:45 2024 00:21:45.956 read: IOPS=14.6k, BW=57.1MiB/s (59.8MB/s)(1141MiB/20004msec) 00:21:45.956 slat (usec): min=2, max=573, avg= 8.77, stdev= 9.43 00:21:45.956 clat (usec): min=1267, max=12094, avg=4370.72, stdev=1167.00 00:21:45.956 lat (usec): min=1350, max=12100, avg=4379.49, stdev=1166.36 00:21:45.956 clat percentiles (usec): 00:21:45.956 | 1.00th=[ 2409], 5.00th=[ 2737], 10.00th=[ 2835], 20.00th=[ 3294], 00:21:45.956 | 30.00th=[ 3589], 40.00th=[ 4047], 50.00th=[ 4293], 60.00th=[ 4686], 
00:21:45.956 | 70.00th=[ 5014], 80.00th=[ 5473], 90.00th=[ 5866], 95.00th=[ 6325], 00:21:45.956 | 99.00th=[ 7308], 99.50th=[ 7963], 99.90th=[ 9110], 99.95th=[ 9503], 00:21:45.956 | 99.99th=[10159] 00:21:45.956 bw ( KiB/s): min=48864, max=61688, per=99.92%, avg=58385.23, stdev=2870.35, samples=39 00:21:45.957 iops : min=12218, max=15422, avg=14596.36, stdev=717.41, samples=39 00:21:45.957 lat (msec) : 2=0.03%, 4=39.15%, 10=60.80%, 20=0.02% 00:21:45.957 cpu : usr=5.29%, sys=12.73%, ctx=26348, majf=0, minf=65 00:21:45.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:21:45.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:45.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:45.957 issued rwts: total=292213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:45.957 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:45.957 00:21:45.957 Run status group 0 (all jobs): 00:21:45.957 READ: bw=57.1MiB/s (59.8MB/s), 57.1MiB/s-57.1MiB/s (59.8MB/s-59.8MB/s), io=1141MiB (1197MB), run=20004-20004msec 00:21:45.957 00:21:45.957 Disk stats (read/write): 00:21:45.957 sda: ios=288872/0, merge=1409/0, ticks=1223593/0, in_queue=1223593, util=99.60% 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@116 -- # rm -rf /mnt/device/test 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@117 -- # umount /mnt/device 00:21:45.957 00:21:45.957 real 0m40.144s 00:21:45.957 user 0m2.176s 00:21:45.957 sys 0m4.800s 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:45.957 ************************************ 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:21:45.957 END TEST iscsi_tgt_filesystem_xfs 00:21:45.957 
************************************ 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@148 -- # rm -rf /mnt/device 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@152 -- # iscsicleanup 00:21:45.957 Cleaning up iSCSI connection 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:21:45.957 Logging out of session [sid: 4, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:21:45.957 Logout of [sid: 4, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@983 -- # rm -rf 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@153 -- # remove_backends 00:21:45.957 INFO: Removing lvol bdev 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@17 -- # echo 'INFO: Removing lvol bdev' 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@18 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:21:45.957 [2024-07-22 16:59:45.821966] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (7975a657-51aa-4a3e-82b5-4cd00aa60234) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.957 INFO: Removing lvol stores 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@20 -- # echo 'INFO: Removing lvol stores' 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@21 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.957 INFO: Removing NVMe 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@23 -- # echo 'INFO: Removing NVMe' 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@24 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@26 -- # return 0 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@154 -- # killprocess 65239 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@948 -- # '[' -z 65239 ']' 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@952 -- # kill -0 65239 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@953 -- # uname 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65239 00:21:45.957 
16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:45.957 killing process with pid 65239 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65239' 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@967 -- # kill 65239 00:21:45.957 16:59:45 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@972 -- # wait 65239 00:21:46.893 16:59:48 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@155 -- # iscsitestfini 00:21:46.893 16:59:48 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:21:46.893 00:21:46.893 real 2m6.136s 00:21:46.893 user 8m4.542s 00:21:46.893 sys 0m33.804s 00:21:46.893 16:59:48 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:46.893 16:59:48 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:21:46.893 ************************************ 00:21:46.893 END TEST iscsi_tgt_filesystem 00:21:46.893 ************************************ 00:21:46.893 16:59:48 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:21:46.893 16:59:48 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@32 -- # run_test chap_during_discovery /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_discovery.sh 00:21:46.893 16:59:48 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:46.893 16:59:48 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:46.893 16:59:48 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:21:46.893 ************************************ 00:21:46.893 START TEST chap_during_discovery 00:21:46.893 ************************************ 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_discovery.sh 00:21:46.893 * Looking for test storage... 00:21:46.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 
00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_common.sh 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@7 -- # TARGET_NAME=iqn.2016-06.io.spdk:disk1 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@8 -- # TARGET_ALIAS_NAME=disk1_alias 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@9 -- # MALLOC_BDEV_SIZE=64 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@10 -- # MALLOC_BLOCK_SIZE=512 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@13 -- # USER=chapo 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@14 -- # MUSER=mchapo 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@15 -- # PASS=123456789123 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@16 -- # MPASS=321978654321 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@19 -- # iscsitestinit 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:21:46.893 16:59:48 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@21 -- # set_up_iscsi_target 00:21:46.894 16:59:48 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@140 -- # timing_enter start_iscsi_tgt 00:21:46.894 16:59:48 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:21:46.894 16:59:48 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.894 16:59:48 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@142 -- # pid=67082 00:21:46.894 16:59:48 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@141 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:21:46.894 iSCSI target launched. pid: 67082 00:21:46.894 16:59:48 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@143 -- # echo 'iSCSI target launched. pid: 67082' 00:21:46.894 16:59:48 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@144 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:21:46.894 16:59:48 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@145 -- # waitforlisten 67082 00:21:46.894 16:59:48 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@829 -- # '[' -z 67082 ']' 00:21:46.894 16:59:48 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.894 16:59:48 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:46.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.894 16:59:48 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.894 16:59:48 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.894 16:59:48 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.894 [2024-07-22 16:59:48.420449] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:21:46.894 [2024-07-22 16:59:48.420639] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67082 ] 00:21:47.459 [2024-07-22 16:59:48.771702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.459 [2024-07-22 16:59:49.017522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.024 16:59:49 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:48.024 16:59:49 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:48.024 16:59:49 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@146 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:21:48.024 16:59:49 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.024 16:59:49 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.024 16:59:49 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.024 16:59:49 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@147 -- # rpc_cmd framework_start_init 00:21:48.024 16:59:49 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.024 16:59:49 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.590 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.590 iscsi_tgt is listening. Running tests... 00:21:48.590 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@148 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:21:48.590 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@149 -- # timing_exit start_iscsi_tgt 00:21:48.590 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:48.590 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.590 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@151 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:21:48.590 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.590 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.590 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.590 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@152 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:21:48.590 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.590 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.590 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.590 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@153 -- # rpc_cmd bdev_malloc_create 64 512 00:21:48.590 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.590 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.848 Malloc0 00:21:48.848 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.848 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@154 -- # rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d 00:21:48.848 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.848 16:59:50 
iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.848 16:59:50 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.848 16:59:50 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@155 -- # sleep 1 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@156 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@24 -- # echo 'configuring target for bidirectional authentication' 00:21:49.783 configuring target for bidirectional authentication 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@25 -- # config_chap_credentials_for_target -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@84 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@13 -- # OPTIND=0 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@20 -- # CHAP_MPASS= 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt
00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 
00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 1 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@95 -- # '[' 0 -eq 1 ']' 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@103 -- # '[' 1 -eq 1 ']' 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@104 -- # rpc_cmd iscsi_set_discovery_auth -r -m -g 1 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.783 16:59:51 
iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.783 executing discovery without adding credential to initiator - we expect failure 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@26 -- # echo 'executing discovery without adding credential to initiator - we expect failure' 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@27 -- # rc=0 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@28 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:21:49.783 iscsiadm: Login failed to authenticate with target 00:21:49.783 iscsiadm: discovery login to 10.0.0.1 rejected: initiator failed authorization 00:21:49.783 iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@28 -- # rc=24 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@29 -- # '[' 24 -eq 0 ']' 00:21:49.783 configuring initiator for bidirectional authentication 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@35 -- # echo 'configuring initiator for bidirectional authentication' 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@36 -- # config_chap_credentials_for_initiator -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@113 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@13 -- # OPTIND=0 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@16 
-- # BI_DIRECT=0 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@20 -- # CHAP_MPASS= 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 
00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@114 -- # default_initiator_chap_credentials 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:21:49.783 iscsiadm: No matching sessions found 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # true 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:21:49.783 iscsiadm: No records found 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # true 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:21:49.783 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:21:49.784 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password 
= password/' /etc/iscsi/iscsid.conf 00:21:49.784 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:21:49.784 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:21:49.784 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:21:49.784 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:21:49.784 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:21:49.784 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:21:49.784 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:21:49.784 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@78 -- # restart_iscsid 00:21:49.784 16:59:51 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:21:53.107 16:59:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:21:53.108 16:59:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- # sleep 1 00:21:54.055 16:59:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@79 -- # trap 'trap - 
ERR; print_backtrace >&2' ERR 00:21:54.055 16:59:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@116 -- # '[' 0 -eq 1 ']' 00:21:54.055 16:59:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@126 -- # '[' 1 -eq 1 ']' 00:21:54.055 16:59:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@127 -- # sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:21:54.055 16:59:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@128 -- # sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:21:54.055 16:59:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@129 -- # sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:21:54.055 16:59:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' 1 -eq 1 ']' 00:21:54.055 16:59:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' -n 321978654321 ']' 00:21:54.055 16:59:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' -n mchapo ']' 00:21:54.055 16:59:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@131 -- # sed -i 's/#discovery.sendtargets.auth.username_in =.*/discovery.sendtargets.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:21:54.055 16:59:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@132 -- # sed -i 's/#discovery.sendtargets.auth.password_in =.*/discovery.sendtargets.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:21:54.055 16:59:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@135 -- # restart_iscsid 00:21:54.055 16:59:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:21:57.338 16:59:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:21:57.338 16:59:58 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- 
# sleep 1 00:21:58.272 16:59:59 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@136 -- # trap 'trap - ERR; default_initiator_chap_credentials; print_backtrace >&2' ERR 00:21:58.272 executing discovery with adding credential to initiator 00:21:58.272 16:59:59 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@37 -- # echo 'executing discovery with adding credential to initiator' 00:21:58.272 16:59:59 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@38 -- # rc=0 00:21:58.272 16:59:59 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@39 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:21:58.272 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1 00:21:58.272 16:59:59 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@40 -- # '[' 0 -ne 0 ']' 00:21:58.272 DONE 00:21:58.272 16:59:59 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@44 -- # echo DONE 00:21:58.272 16:59:59 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@45 -- # default_initiator_chap_credentials 00:21:58.272 16:59:59 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:21:58.272 iscsiadm: No matching sessions found 00:21:58.272 16:59:59 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # true 00:21:58.272 16:59:59 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:21:58.272 16:59:59 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:21:58.272 16:59:59 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:21:58.272 16:59:59 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:21:58.272 16:59:59 
iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:21:58.272 16:59:59 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:21:58.272 16:59:59 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:21:58.272 16:59:59 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:21:58.272 16:59:59 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:21:58.272 16:59:59 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:21:58.272 16:59:59 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:21:58.272 16:59:59 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@78 -- # restart_iscsid 00:21:58.272 16:59:59 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:22:01.615 17:00:02 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:22:01.615 17:00:02 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- # sleep 1 00:22:02.179 17:00:03 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:22:02.179 17:00:03 
iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@47 -- # trap - SIGINT SIGTERM EXIT 00:22:02.179 17:00:03 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@49 -- # killprocess 67082 00:22:02.179 17:00:03 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@948 -- # '[' -z 67082 ']' 00:22:02.179 17:00:03 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@952 -- # kill -0 67082 00:22:02.179 17:00:03 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@953 -- # uname 00:22:02.179 17:00:03 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:02.179 17:00:03 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67082 00:22:02.179 killing process with pid 67082 00:22:02.179 17:00:03 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:02.179 17:00:03 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:02.179 17:00:03 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67082' 00:22:02.179 17:00:03 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@967 -- # kill 67082 00:22:02.179 17:00:03 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@972 -- # wait 67082 00:22:04.705 17:00:06 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@51 -- # iscsitestfini 00:22:04.705 17:00:06 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:22:04.705 00:22:04.705 real 0m18.070s 00:22:04.705 user 0m17.830s 00:22:04.705 sys 0m0.926s 00:22:04.705 17:00:06 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:04.705 17:00:06 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.705 ************************************ 00:22:04.705 END TEST chap_during_discovery 00:22:04.705 
************************************ 00:22:04.963 17:00:06 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:22:04.963 17:00:06 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@33 -- # run_test chap_mutual_auth /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_mutual_not_set.sh 00:22:04.963 17:00:06 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:04.963 17:00:06 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:04.963 17:00:06 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:22:04.963 ************************************ 00:22:04.963 START TEST chap_mutual_auth 00:22:04.963 ************************************ 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_mutual_not_set.sh 00:22:04.963 * Looking for test storage... 00:22:04.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 
00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_common.sh 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@7 -- # TARGET_NAME=iqn.2016-06.io.spdk:disk1 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@8 -- # TARGET_ALIAS_NAME=disk1_alias 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@9 -- # MALLOC_BDEV_SIZE=64 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@10 -- # MALLOC_BLOCK_SIZE=512 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@13 -- # USER=chapo 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@14 -- # MUSER=mchapo 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@15 -- # 
PASS=123456789123 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@16 -- # MPASS=321978654321 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@19 -- # iscsitestinit 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@21 -- # set_up_iscsi_target 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@140 -- # timing_enter start_iscsi_tgt 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@142 -- # pid=67377 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@141 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:22:04.963 iSCSI target launched. pid: 67377 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@143 -- # echo 'iSCSI target launched. pid: 67377' 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@144 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@145 -- # waitforlisten 67377 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@829 -- # '[' -z 67377 ']' 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:04.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:04.963 17:00:06 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:22:04.963 [2024-07-22 17:00:06.555375] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:22:04.963 [2024-07-22 17:00:06.555559] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67377 ] 00:22:05.528 [2024-07-22 17:00:06.913917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.786 [2024-07-22 17:00:07.156589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.045 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:06.045 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@862 -- # return 0 00:22:06.045 17:00:07 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@146 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:22:06.045 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.045 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:22:06.045 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.045 17:00:07 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@147 -- # rpc_cmd framework_start_init 00:22:06.045 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.045 17:00:07 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:22:06.611 17:00:08 iscsi_tgt.chap_mutual_auth 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.611 iscsi_tgt is listening. Running tests... 00:22:06.611 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@148 -- # echo 'iscsi_tgt is listening. Running tests...' 00:22:06.612 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@149 -- # timing_exit start_iscsi_tgt 00:22:06.612 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:06.612 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:22:06.870 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@151 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:22:06.870 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.870 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:22:06.870 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.870 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@152 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:22:06.870 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.870 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:22:06.870 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.870 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@153 -- # rpc_cmd bdev_malloc_create 64 512 00:22:06.870 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.870 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:22:06.870 Malloc0 00:22:06.870 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.870 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@154 -- # rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d 
00:22:06.870 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.870 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:22:06.870 17:00:08 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.870 17:00:08 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@155 -- # sleep 1 00:22:07.869 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@156 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:22:07.869 configuring target for authentication 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@24 -- # echo 'configuring target for authentication' 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@25 -- # config_chap_credentials_for_target -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@84 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MPASS= 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts 
:t:u:s:r:m:dlb opt 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- 
# DURING_LOGIN=1 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 1 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@95 -- # '[' 1 -eq 1 ']' 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@96 -- # '[' 0 -eq 1 ']' 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@99 -- # rpc_cmd iscsi_target_node_set_auth -g 1 -r iqn.2016-06.io.spdk:disk1 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@103 -- # '[' 0 -eq 1 ']' 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@106 -- # rpc_cmd iscsi_set_discovery_auth -r -g 1 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.870 executing discovery without adding credential to initiator - we expect failure 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@26 -- # echo 'executing discovery without adding credential to initiator - we expect failure' 00:22:07.870 configuring initiator with bidirectional authentication 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@28 -- # echo 'configuring initiator with bidirectional authentication' 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@29 -- # config_chap_credentials_for_initiator -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@113 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:22:07.870 17:00:09 
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:22:07.870 17:00:09 
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@114 -- # default_initiator_chap_credentials 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:22:07.870 iscsiadm: No matching sessions found 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # true 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:22:07.870 iscsiadm: No records found 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # true 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' 
/etc/iscsi/iscsid.conf 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@78 -- # restart_iscsid 00:22:07.870 17:00:09 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:22:11.215 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:22:11.215 17:00:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:22:12.169 17:00:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:22:12.169 17:00:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@116 -- # '[' 1 -eq 1 ']' 00:22:12.169 17:00:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@117 -- # sed -i 's/#node.session.auth.authmethod = 
CHAP/node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:22:12.169 17:00:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@118 -- # sed -i 's/#node.session.auth.username =.*/node.session.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:22:12.169 17:00:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@119 -- # sed -i 's/#node.session.auth.password =.*/node.session.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:22:12.169 17:00:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' 1 -eq 1 ']' 00:22:12.169 17:00:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' -n 321978654321 ']' 00:22:12.169 17:00:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' -n mchapo ']' 00:22:12.169 17:00:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@121 -- # sed -i 's/#node.session.auth.username_in =.*/node.session.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:22:12.169 17:00:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@122 -- # sed -i 's/#node.session.auth.password_in =.*/node.session.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:22:12.169 17:00:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@126 -- # '[' 1 -eq 1 ']' 00:22:12.169 17:00:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@127 -- # sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:22:12.169 17:00:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@128 -- # sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:22:12.169 17:00:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@129 -- # sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:22:12.169 17:00:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' 1 -eq 1 ']' 00:22:12.169 17:00:13 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@130 -- # '[' -n 321978654321 ']' 00:22:12.169 17:00:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' -n mchapo ']' 00:22:12.169 17:00:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@131 -- # sed -i 's/#discovery.sendtargets.auth.username_in =.*/discovery.sendtargets.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:22:12.169 17:00:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@132 -- # sed -i 's/#discovery.sendtargets.auth.password_in =.*/discovery.sendtargets.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:22:12.169 17:00:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@135 -- # restart_iscsid 00:22:12.169 17:00:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:22:15.460 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:22:15.460 17:00:16 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:22:16.026 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@136 -- # trap 'trap - ERR; default_initiator_chap_credentials; print_backtrace >&2' ERR 00:22:16.026 executing discovery - target should not be discovered since the -m option was not used 00:22:16.026 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@30 -- # echo 'executing discovery - target should not be discovered since the -m option was not used' 00:22:16.026 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@31 -- # rc=0 00:22:16.026 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@32 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:22:16.285 [2024-07-22 17:00:17.648232] iscsi.c: 982:iscsi_auth_params: *ERROR*: Initiator wants to use mutual CHAP for security, but it's not enabled. 
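The `sed -i` calls traced above flip the commented CHAP defaults in `/etc/iscsi/iscsid.conf` into active mutual-CHAP settings. A minimal sketch of the same technique, run against a temporary copy of a conf fragment rather than the real `/etc/iscsi/iscsid.conf` (the credentials mirror the test values in this log; a real deployment would use its own secrets and restart iscsid afterwards, as the log does):

```shell
#!/usr/bin/env bash
# Sketch: enable mutual CHAP for normal sessions in an iscsid.conf-style file.
# Operates on a temp copy so it is safe to run anywhere.
set -u

conf=$(mktemp)
cat > "$conf" <<'EOF'
#node.session.auth.authmethod = CHAP
#node.session.auth.username = username
#node.session.auth.password = password
#node.session.auth.username_in = username_in
#node.session.auth.password_in = password_in
EOF

CHAP_USER=chapo
CHAP_PASS=123456789123
CHAP_MUSER=mchapo
CHAP_MPASS=321978654321

# Uncomment and fill in the one-way (initiator -> target) credentials...
sed -i "s|^#node.session.auth.authmethod = CHAP|node.session.auth.authmethod = CHAP|" "$conf"
sed -i "s|^#node.session.auth.username =.*|node.session.auth.username = $CHAP_USER|" "$conf"
sed -i "s|^#node.session.auth.password =.*|node.session.auth.password = $CHAP_PASS|" "$conf"
# ...and the *_in keys, which hold the target's credentials and make the
# authentication mutual (bidirectional).
sed -i "s|^#node.session.auth.username_in =.*|node.session.auth.username_in = $CHAP_MUSER|" "$conf"
sed -i "s|^#node.session.auth.password_in =.*|node.session.auth.password_in = $CHAP_MPASS|" "$conf"

result=$(grep '^node.session.auth' "$conf")
echo "$result"
rm -f "$conf"
```

The discovery-phase keys (`discovery.sendtargets.auth.*`) follow the exact same comment/uncomment pattern, as the subsequent `sed` lines in the log show.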
00:22:16.285 [2024-07-22 17:00:17.648335] iscsi.c:1957:iscsi_op_login_rsp_handle_csg_bit: *ERROR*: iscsi_auth_params() failed 00:22:16.285 iscsiadm: Login failed to authenticate with target 00:22:16.285 iscsiadm: discovery login to 10.0.0.1 rejected: initiator failed authorization 00:22:16.285 iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@32 -- # rc=24 00:22:16.285 configuring target for authentication with the -m option 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@33 -- # '[' 24 -eq 0 ']' 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@37 -- # echo 'configuring target for authentication with the -m option' 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@38 -- # config_chap_credentials_for_target -t 2 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@84 -- # parse_cmd_line -t 2 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- 
# AUTH_GROUP_ID=1 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=2 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@24 -- # case ${opt} in 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 2 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 2 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@95 -- # '[' 1 -eq 1 ']' 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@96 -- # '[' 1 -eq 1 ']' 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@97 -- # rpc_cmd iscsi_target_node_set_auth -g 2 -r -m iqn.2016-06.io.spdk:disk1 00:22:16.285 17:00:17 
iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@103 -- # '[' 1 -eq 1 ']' 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@104 -- # rpc_cmd iscsi_set_discovery_auth -r -m -g 2 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.285 executing discovery: 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@39 -- # echo 'executing discovery:' 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@40 -- # rc=0 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@41 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:22:16.285 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1 00:22:16.285 executing login: 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@42 -- # '[' 0 -ne 0 ']' 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@46 -- # echo 'executing login:' 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@47 -- # rc=0 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@48 -- # iscsiadm -m node -l -p 10.0.0.1:3260 00:22:16.285 Logging in to [iface: default, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 00:22:16.285 Login to [iface: default, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 
successful. 00:22:16.285 DONE 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@49 -- # '[' 0 -ne 0 ']' 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@54 -- # echo DONE 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@55 -- # default_initiator_chap_credentials 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:22:16.285 Logging out of session [sid: 5, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 00:22:16.285 Logout of [sid: 5, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] successful. 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@78 -- # restart_iscsid 00:22:16.285 17:00:17 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:22:19.583 17:00:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:22:19.583 17:00:20 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:22:20.523 17:00:21 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:22:20.523 17:00:21 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@57 -- # trap - SIGINT SIGTERM EXIT 00:22:20.523 17:00:21 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@59 -- # killprocess 67377 00:22:20.523 17:00:21 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@948 -- # '[' -z 67377 ']' 00:22:20.523 17:00:21 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@952 -- # kill -0 67377 00:22:20.523 17:00:21 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@953 -- # uname 00:22:20.523 17:00:21 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:20.523 17:00:21 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 67377 00:22:20.523 killing process with pid 67377 00:22:20.523 17:00:21 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:20.523 17:00:21 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:20.523 17:00:21 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67377' 00:22:20.523 17:00:21 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@967 -- # kill 67377 00:22:20.523 17:00:21 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@972 -- # wait 67377 00:22:23.052 17:00:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@61 -- # iscsitestfini 00:22:23.052 17:00:24 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:22:23.052 00:22:23.052 real 0m18.214s 00:22:23.052 user 0m17.963s 00:22:23.052 sys 0m0.941s 00:22:23.052 17:00:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:23.052 ************************************ 00:22:23.052 END TEST chap_mutual_auth 00:22:23.052 ************************************ 00:22:23.052 17:00:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:22:23.052 17:00:24 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:22:23.052 17:00:24 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@34 -- # run_test iscsi_tgt_reset /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset/reset.sh 00:22:23.052 17:00:24 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:23.052 17:00:24 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:23.052 17:00:24 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:22:23.052 ************************************ 00:22:23.052 START TEST iscsi_tgt_reset 00:22:23.052 ************************************ 00:22:23.052 17:00:24 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset/reset.sh 00:22:23.052 * Looking for test storage... 00:22:23.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:22:23.311 
17:00:24 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@11 -- # iscsitestinit 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@13 -- # MALLOC_BDEV_SIZE=64 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@16 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@18 -- # hash sg_reset 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@22 -- # timing_enter start_iscsi_tgt 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@25 -- # pid=67700 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@26 -- # echo 'Process pid: 67700' 00:22:23.311 Process pid: 67700 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@28 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@24 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@30 -- # waitforlisten 67700 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@829 -- # '[' -z 67700 ']' 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:23.311 17:00:24 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:22:23.311 [2024-07-22 17:00:24.816488] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:22:23.311 [2024-07-22 17:00:24.816884] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67700 ] 00:22:23.569 [2024-07-22 17:00:24.989758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.827 [2024-07-22 17:00:25.311191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.394 17:00:25 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:24.394 17:00:25 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@862 -- # return 0 00:22:24.394 17:00:25 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@31 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:22:24.394 17:00:25 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.394 17:00:25 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:22:24.394 17:00:25 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.394 17:00:25 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@32 -- # rpc_cmd framework_start_init 00:22:24.394 17:00:25 
iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.394 17:00:25 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:22:25.371 iscsi_tgt is listening. Running tests... 00:22:25.371 17:00:26 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.371 17:00:26 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@33 -- # echo 'iscsi_tgt is listening. Running tests...' 00:22:25.371 17:00:26 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@35 -- # timing_exit start_iscsi_tgt 00:22:25.371 17:00:26 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:25.371 17:00:26 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:22:25.372 17:00:26 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@37 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:22:25.372 17:00:26 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.372 17:00:26 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:22:25.372 17:00:26 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.372 17:00:26 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@38 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:22:25.372 17:00:26 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.372 17:00:26 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:22:25.372 17:00:26 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.372 17:00:26 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@39 -- # rpc_cmd bdev_malloc_create 64 512 00:22:25.372 17:00:26 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.372 17:00:26 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:22:25.372 Malloc0 00:22:25.372 17:00:26 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:22:25.372 17:00:26 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@44 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d 00:22:25.372 17:00:26 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.372 17:00:26 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:22:25.372 17:00:26 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.372 17:00:26 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@45 -- # sleep 1 00:22:26.306 17:00:27 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@47 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:22:26.306 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:22:26.306 17:00:27 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@48 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:22:26.306 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:22:26.307 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:22:26.307 17:00:27 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@49 -- # waitforiscsidevices 1 00:22:26.307 17:00:27 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@116 -- # local num=1 00:22:26.307 17:00:27 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:22:26.307 17:00:27 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:22:26.307 17:00:27 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:22:26.307 17:00:27 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:22:26.307 [2024-07-22 17:00:27.873121] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:26.307 17:00:27 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # n=1 00:22:26.307 17:00:27 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:22:26.307 17:00:27 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@123 -- # return 0 00:22:26.307 17:00:27 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # iscsiadm -m session -P 3 00:22:26.307 17:00:27 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # grep 'Attached scsi disk' 00:22:26.307 17:00:27 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # awk '{print $4}' 00:22:26.307 FIO pid: 67774 00:22:26.307 17:00:27 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # dev=sda 00:22:26.307 17:00:27 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@54 -- # fiopid=67774 00:22:26.307 17:00:27 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 60 00:22:26.307 17:00:27 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@55 -- # echo 'FIO pid: 67774' 00:22:26.307 17:00:27 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@57 -- # trap 'iscsicleanup; killprocess $pid; killprocess $fiopid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:22:26.307 17:00:27 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 
00:22:26.307 17:00:27 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:22:26.307 [global] 00:22:26.307 thread=1 00:22:26.307 invalidate=1 00:22:26.307 rw=read 00:22:26.307 time_based=1 00:22:26.307 runtime=60 00:22:26.307 ioengine=libaio 00:22:26.307 direct=1 00:22:26.307 bs=512 00:22:26.307 iodepth=1 00:22:26.307 norandommap=1 00:22:26.307 numjobs=1 00:22:26.307 00:22:26.307 [job0] 00:22:26.307 filename=/dev/sda 00:22:26.565 queue_depth set to 113 (sda) 00:22:26.565 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:22:26.565 fio-3.35 00:22:26.565 Starting 1 thread 00:22:27.536 17:00:28 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 67700 00:22:27.536 17:00:28 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 67774 00:22:27.536 17:00:28 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:22:27.536 [2024-07-22 17:00:28.896020] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:22:27.536 [2024-07-22 17:00:28.896195] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:22:27.536 17:00:28 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:22:27.536 [2024-07-22 17:00:28.897641] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:28.469 17:00:29 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 67700 00:22:28.469 17:00:29 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 67774 00:22:28.469 17:00:29 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:22:28.469 17:00:29 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:22:29.410 17:00:30 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 67700 00:22:29.410 17:00:30 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 67774 00:22:29.410 17:00:30 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:22:29.410 [2024-07-22 
17:00:30.909635] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:22:29.410 [2024-07-22 17:00:30.909739] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:22:29.410 17:00:30 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:22:29.410 [2024-07-22 17:00:30.911239] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:30.343 17:00:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 67700 00:22:30.343 17:00:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 67774 00:22:30.343 17:00:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:22:30.343 17:00:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:22:31.726 17:00:32 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 67700 00:22:31.726 17:00:32 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 67774 00:22:31.726 17:00:32 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:22:31.726 [2024-07-22 17:00:32.921401] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:22:31.726 [2024-07-22 17:00:32.921517] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:22:31.726 17:00:32 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:22:31.726 [2024-07-22 17:00:32.922981] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:32.673 17:00:33 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 67700 00:22:32.673 17:00:33 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 67774 00:22:32.673 17:00:33 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@70 -- # kill 67774 00:22:32.673 17:00:33 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@71 -- # wait 67774 00:22:32.673 17:00:33 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@71 -- # true 00:22:32.673 17:00:33 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@73 -- # trap - SIGINT SIGTERM EXIT 
00:22:32.673 17:00:33 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@75 -- # iscsicleanup 00:22:32.673 Cleaning up iSCSI connection 00:22:32.673 17:00:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:22:32.673 17:00:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:22:32.673 fio: pid=67807, err=19/file:io_u.c:1889, func=io_u error, error=No such device 00:22:32.673 fio: io_u error on file /dev/sda: No such device: read offset=27294720, buflen=512 00:22:32.673 Logging out of session [sid: 6, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:22:32.673 Logout of [sid: 6, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:22:32.673 17:00:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:22:32.673 00:22:32.673 job0: (groupid=0, jobs=1): err=19 (file:io_u.c:1889, func=io_u error, error=No such device): pid=67807: Mon Jul 22 17:00:33 2024 00:22:32.673 read: IOPS=9244, BW=4622KiB/s (4733kB/s)(26.0MiB/5767msec) 00:22:32.673 slat (usec): min=3, max=1018, avg= 7.14, stdev= 6.02 00:22:32.673 clat (usec): min=2, max=3373, avg=100.16, stdev=31.79 00:22:32.673 lat (usec): min=82, max=3381, avg=107.29, stdev=32.23 00:22:32.673 clat percentiles (usec): 00:22:32.673 | 1.00th=[ 82], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 89], 00:22:32.673 | 30.00th=[ 90], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 99], 00:22:32.673 | 70.00th=[ 104], 80.00th=[ 110], 90.00th=[ 121], 95.00th=[ 131], 00:22:32.673 | 99.00th=[ 159], 99.50th=[ 174], 99.90th=[ 255], 99.95th=[ 388], 00:22:32.673 | 99.99th=[ 1221] 00:22:32.673 bw ( KiB/s): min= 4467, max= 4848, per=100.00%, avg=4636.73, stdev=148.08, samples=11 00:22:32.673 iops : min= 8934, max= 9696, avg=9273.45, stdev=296.16, samples=11 00:22:32.673 lat (usec) : 4=0.01%, 10=0.01%, 100=61.98%, 250=37.91%, 500=0.07% 00:22:32.673 lat (usec) : 750=0.01%, 1000=0.01% 00:22:32.673 lat 
(msec) : 2=0.01%, 4=0.01% 00:22:32.673 cpu : usr=2.77%, sys=8.57%, ctx=53720, majf=0, minf=1 00:22:32.673 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:32.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.673 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.673 issued rwts: total=53311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.673 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:32.673 00:22:32.673 Run status group 0 (all jobs): 00:22:32.673 READ: bw=4622KiB/s (4733kB/s), 4622KiB/s-4622KiB/s (4733kB/s-4733kB/s), io=26.0MiB (27.3MB), run=5767-5767msec 00:22:32.673 00:22:32.673 Disk stats (read/write): 00:22:32.673 sda: ios=52359/0, merge=0/0, ticks=5198/0, in_queue=5198, util=98.29% 00:22:32.673 17:00:34 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@983 -- # rm -rf 00:22:32.673 17:00:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@76 -- # killprocess 67700 00:22:32.673 17:00:34 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@948 -- # '[' -z 67700 ']' 00:22:32.673 17:00:34 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@952 -- # kill -0 67700 00:22:32.673 17:00:34 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@953 -- # uname 00:22:32.673 17:00:34 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:32.673 17:00:34 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67700 00:22:32.673 17:00:34 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:32.673 17:00:34 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:32.673 17:00:34 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67700' 00:22:32.673 killing process with pid 67700 00:22:32.673 17:00:34 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@967 -- # kill 
67700 00:22:32.673 17:00:34 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@972 -- # wait 67700 00:22:35.204 17:00:36 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@77 -- # iscsitestfini 00:22:35.204 17:00:36 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:22:35.204 00:22:35.204 real 0m12.070s 00:22:35.204 user 0m9.194s 00:22:35.204 sys 0m2.564s 00:22:35.204 17:00:36 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:22:35.205 ************************************ 00:22:35.205 END TEST iscsi_tgt_reset 00:22:35.205 ************************************ 00:22:35.205 17:00:36 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:22:35.205 17:00:36 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@35 -- # run_test iscsi_tgt_rpc_config /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.sh 00:22:35.205 17:00:36 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:35.205 17:00:36 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:35.205 17:00:36 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:22:35.205 ************************************ 00:22:35.205 START TEST iscsi_tgt_rpc_config 00:22:35.205 ************************************ 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.sh 00:22:35.205 * Looking for test storage... 
00:22:35.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:22:35.205 
17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@11 -- # iscsitestinit 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@13 -- # MALLOC_BDEV_SIZE=64 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@16 -- # rpc_config_py=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.py 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@18 -- # timing_enter start_iscsi_tgt 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:22:35.205 Process pid: 67974 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@21 -- # pid=67974 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@22 -- # echo 'Process pid: 67974' 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@24 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@26 -- # waitforlisten 67974 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@20 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:22:35.205 17:00:36 
iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@829 -- # '[' -z 67974 ']' 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.205 17:00:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:22:35.463 [2024-07-22 17:00:36.960937] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:22:35.463 [2024-07-22 17:00:36.961207] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67974 ] 00:22:35.721 [2024-07-22 17:00:37.141141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.979 [2024-07-22 17:00:37.433263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.546 17:00:37 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.546 17:00:37 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@862 -- # return 0 00:22:36.546 17:00:37 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@28 -- # rpc_wait_pid=67990 00:22:36.546 17:00:37 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:22:36.546 17:00:37 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@29 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 16 00:22:36.847 17:00:38 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@32 -- # ps 67990 00:22:36.847 PID TTY STAT TIME COMMAND 00:22:36.847 67990 ? S 0:00 python3 /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:22:36.847 17:00:38 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:22:37.780 17:00:39 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@35 -- # sleep 1 00:22:39.175 17:00:40 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@36 -- # echo 'iscsi_tgt is listening. Running tests...' 00:22:39.175 iscsi_tgt is listening. Running tests... 00:22:39.175 17:00:40 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@39 -- # NOT ps 67990 00:22:39.175 17:00:40 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@648 -- # local es=0 00:22:39.175 17:00:40 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@650 -- # valid_exec_arg ps 67990 00:22:39.175 17:00:40 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@636 -- # local arg=ps 00:22:39.175 17:00:40 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:39.175 17:00:40 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # type -t ps 00:22:39.175 17:00:40 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:39.175 17:00:40 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # type -P ps 00:22:39.175 17:00:40 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:39.175 17:00:40 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # arg=/usr/bin/ps 00:22:39.175 17:00:40 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/ps ]] 00:22:39.175 17:00:40 
iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # ps 67990 00:22:39.175 PID TTY STAT TIME COMMAND 00:22:39.175 17:00:40 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # es=1 00:22:39.176 17:00:40 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:39.176 17:00:40 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:39.176 17:00:40 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:39.176 17:00:40 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@43 -- # rpc_wait_pid=68026 00:22:39.176 17:00:40 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@44 -- # sleep 1 00:22:39.176 17:00:40 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:22:40.111 17:00:41 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@45 -- # NOT ps 68026 00:22:40.111 17:00:41 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@648 -- # local es=0 00:22:40.111 17:00:41 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@650 -- # valid_exec_arg ps 68026 00:22:40.111 17:00:41 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@636 -- # local arg=ps 00:22:40.111 17:00:41 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.111 17:00:41 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # type -t ps 00:22:40.111 17:00:41 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.111 17:00:41 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # type -P ps 00:22:40.111 17:00:41 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.111 17:00:41 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # arg=/usr/bin/ps 00:22:40.111 17:00:41 
iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/ps ]] 00:22:40.111 17:00:41 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # ps 68026 00:22:40.111 PID TTY STAT TIME COMMAND 00:22:40.111 17:00:41 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # es=1 00:22:40.111 17:00:41 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:40.111 17:00:41 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:40.111 17:00:41 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:40.111 17:00:41 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@47 -- # timing_exit start_iscsi_tgt 00:22:40.111 17:00:41 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:40.111 17:00:41 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:22:40.111 17:00:41 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@49 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.py /home/vagrant/spdk_repo/spdk/scripts/rpc.py 10.0.0.1 10.0.0.2 3260 10.0.0.2/32 spdk_iscsi_ns 00:23:12.219 [2024-07-22 17:01:09.251672] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:12.219 [2024-07-22 17:01:12.513628] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:12.800 verify_log_flag_rpc_methods passed 00:23:12.800 create_malloc_bdevs_rpc_methods passed 00:23:12.800 verify_portal_groups_rpc_methods passed 00:23:12.800 verify_initiator_groups_rpc_method passed. 00:23:12.800 This issue will be fixed later. 00:23:12.800 verify_target_nodes_rpc_methods passed. 
00:23:12.800 verify_scsi_devices_rpc_methods passed 00:23:12.800 verify_iscsi_connection_rpc_methods passed 00:23:12.800 17:01:14 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:23:12.800 [ 00:23:12.800 { 00:23:12.800 "name": "Malloc0", 00:23:12.800 "aliases": [ 00:23:12.800 "79cf4903-b7e4-4db1-abd4-32ca8fc5a7bb" 00:23:12.800 ], 00:23:12.800 "product_name": "Malloc disk", 00:23:12.800 "block_size": 512, 00:23:12.800 "num_blocks": 131072, 00:23:12.800 "uuid": "79cf4903-b7e4-4db1-abd4-32ca8fc5a7bb", 00:23:12.800 "assigned_rate_limits": { 00:23:12.800 "rw_ios_per_sec": 0, 00:23:12.800 "rw_mbytes_per_sec": 0, 00:23:12.800 "r_mbytes_per_sec": 0, 00:23:12.800 "w_mbytes_per_sec": 0 00:23:12.800 }, 00:23:12.800 "claimed": false, 00:23:12.800 "zoned": false, 00:23:12.800 "supported_io_types": { 00:23:12.800 "read": true, 00:23:12.800 "write": true, 00:23:12.800 "unmap": true, 00:23:12.800 "flush": true, 00:23:12.800 "reset": true, 00:23:12.800 "nvme_admin": false, 00:23:12.800 "nvme_io": false, 00:23:12.800 "nvme_io_md": false, 00:23:12.800 "write_zeroes": true, 00:23:12.800 "zcopy": true, 00:23:12.800 "get_zone_info": false, 00:23:12.800 "zone_management": false, 00:23:12.800 "zone_append": false, 00:23:12.800 "compare": false, 00:23:12.800 "compare_and_write": false, 00:23:12.800 "abort": true, 00:23:12.800 "seek_hole": false, 00:23:12.800 "seek_data": false, 00:23:12.800 "copy": true, 00:23:12.800 "nvme_iov_md": false 00:23:12.800 }, 00:23:12.800 "memory_domains": [ 00:23:12.800 { 00:23:12.800 "dma_device_id": "system", 00:23:12.800 "dma_device_type": 1 00:23:12.800 }, 00:23:12.800 { 00:23:12.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.800 "dma_device_type": 2 00:23:12.800 } 00:23:12.800 ], 00:23:12.800 "driver_specific": {} 00:23:12.800 }, 00:23:12.800 { 00:23:12.800 "name": "Malloc1", 00:23:12.800 "aliases": [ 00:23:12.800 "beb6b45b-16a4-4ea6-897d-6f71a914b839" 00:23:12.800 ], 
00:23:12.800 "product_name": "Malloc disk", 00:23:12.800 "block_size": 512, 00:23:12.800 "num_blocks": 131072, 00:23:12.800 "uuid": "beb6b45b-16a4-4ea6-897d-6f71a914b839", 00:23:12.800 "assigned_rate_limits": { 00:23:12.800 "rw_ios_per_sec": 0, 00:23:12.800 "rw_mbytes_per_sec": 0, 00:23:12.800 "r_mbytes_per_sec": 0, 00:23:12.800 "w_mbytes_per_sec": 0 00:23:12.800 }, 00:23:12.800 "claimed": false, 00:23:12.800 "zoned": false, 00:23:12.800 "supported_io_types": { 00:23:12.800 "read": true, 00:23:12.800 "write": true, 00:23:12.800 "unmap": true, 00:23:12.800 "flush": true, 00:23:12.800 "reset": true, 00:23:12.800 "nvme_admin": false, 00:23:12.800 "nvme_io": false, 00:23:12.800 "nvme_io_md": false, 00:23:12.800 "write_zeroes": true, 00:23:12.800 "zcopy": true, 00:23:12.800 "get_zone_info": false, 00:23:12.800 "zone_management": false, 00:23:12.800 "zone_append": false, 00:23:12.800 "compare": false, 00:23:12.800 "compare_and_write": false, 00:23:12.800 "abort": true, 00:23:12.800 "seek_hole": false, 00:23:12.800 "seek_data": false, 00:23:12.800 "copy": true, 00:23:12.800 "nvme_iov_md": false 00:23:12.800 }, 00:23:12.800 "memory_domains": [ 00:23:12.800 { 00:23:12.800 "dma_device_id": "system", 00:23:12.800 "dma_device_type": 1 00:23:12.800 }, 00:23:12.800 { 00:23:12.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.801 "dma_device_type": 2 00:23:12.801 } 00:23:12.801 ], 00:23:12.801 "driver_specific": {} 00:23:12.801 }, 00:23:12.801 { 00:23:12.801 "name": "Malloc2", 00:23:12.801 "aliases": [ 00:23:12.801 "9526bf61-3f7b-4cd3-925a-7bb1f842e93b" 00:23:12.801 ], 00:23:12.801 "product_name": "Malloc disk", 00:23:12.801 "block_size": 512, 00:23:12.801 "num_blocks": 131072, 00:23:12.801 "uuid": "9526bf61-3f7b-4cd3-925a-7bb1f842e93b", 00:23:12.801 "assigned_rate_limits": { 00:23:12.801 "rw_ios_per_sec": 0, 00:23:12.801 "rw_mbytes_per_sec": 0, 00:23:12.801 "r_mbytes_per_sec": 0, 00:23:12.801 "w_mbytes_per_sec": 0 00:23:12.801 }, 00:23:12.801 "claimed": false, 00:23:12.801 
"zoned": false, 00:23:12.801 "supported_io_types": { 00:23:12.801 "read": true, 00:23:12.801 "write": true, 00:23:12.801 "unmap": true, 00:23:12.801 "flush": true, 00:23:12.801 "reset": true, 00:23:12.801 "nvme_admin": false, 00:23:12.801 "nvme_io": false, 00:23:12.801 "nvme_io_md": false, 00:23:12.801 "write_zeroes": true, 00:23:12.801 "zcopy": true, 00:23:12.801 "get_zone_info": false, 00:23:12.801 "zone_management": false, 00:23:12.801 "zone_append": false, 00:23:12.801 "compare": false, 00:23:12.801 "compare_and_write": false, 00:23:12.801 "abort": true, 00:23:12.801 "seek_hole": false, 00:23:12.801 "seek_data": false, 00:23:12.801 "copy": true, 00:23:12.801 "nvme_iov_md": false 00:23:12.801 }, 00:23:12.801 "memory_domains": [ 00:23:12.801 { 00:23:12.801 "dma_device_id": "system", 00:23:12.801 "dma_device_type": 1 00:23:12.801 }, 00:23:12.801 { 00:23:12.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.801 "dma_device_type": 2 00:23:12.801 } 00:23:12.801 ], 00:23:12.801 "driver_specific": {} 00:23:12.801 }, 00:23:12.801 { 00:23:12.801 "name": "Malloc3", 00:23:12.801 "aliases": [ 00:23:12.801 "784c6aa6-3c37-4c66-b0fa-2f994e5387db" 00:23:12.801 ], 00:23:12.801 "product_name": "Malloc disk", 00:23:12.801 "block_size": 512, 00:23:12.801 "num_blocks": 131072, 00:23:12.801 "uuid": "784c6aa6-3c37-4c66-b0fa-2f994e5387db", 00:23:12.801 "assigned_rate_limits": { 00:23:12.801 "rw_ios_per_sec": 0, 00:23:12.801 "rw_mbytes_per_sec": 0, 00:23:12.801 "r_mbytes_per_sec": 0, 00:23:12.801 "w_mbytes_per_sec": 0 00:23:12.801 }, 00:23:12.801 "claimed": false, 00:23:12.801 "zoned": false, 00:23:12.801 "supported_io_types": { 00:23:12.801 "read": true, 00:23:12.801 "write": true, 00:23:12.801 "unmap": true, 00:23:12.801 "flush": true, 00:23:12.801 "reset": true, 00:23:12.801 "nvme_admin": false, 00:23:12.801 "nvme_io": false, 00:23:12.801 "nvme_io_md": false, 00:23:12.801 "write_zeroes": true, 00:23:12.801 "zcopy": true, 00:23:12.801 "get_zone_info": false, 00:23:12.801 
"zone_management": false, 00:23:12.801 "zone_append": false, 00:23:12.801 "compare": false, 00:23:12.801 "compare_and_write": false, 00:23:12.801 "abort": true, 00:23:12.801 "seek_hole": false, 00:23:12.801 "seek_data": false, 00:23:12.801 "copy": true, 00:23:12.801 "nvme_iov_md": false 00:23:12.801 }, 00:23:12.801 "memory_domains": [ 00:23:12.801 { 00:23:12.801 "dma_device_id": "system", 00:23:12.801 "dma_device_type": 1 00:23:12.801 }, 00:23:12.801 { 00:23:12.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.801 "dma_device_type": 2 00:23:12.801 } 00:23:12.801 ], 00:23:12.801 "driver_specific": {} 00:23:12.801 }, 00:23:12.801 { 00:23:12.801 "name": "Malloc4", 00:23:12.801 "aliases": [ 00:23:12.801 "3f798d0c-799b-4012-ba93-8f5d05eca24b" 00:23:12.801 ], 00:23:12.801 "product_name": "Malloc disk", 00:23:12.801 "block_size": 512, 00:23:12.801 "num_blocks": 131072, 00:23:12.801 "uuid": "3f798d0c-799b-4012-ba93-8f5d05eca24b", 00:23:12.801 "assigned_rate_limits": { 00:23:12.801 "rw_ios_per_sec": 0, 00:23:12.801 "rw_mbytes_per_sec": 0, 00:23:12.801 "r_mbytes_per_sec": 0, 00:23:12.801 "w_mbytes_per_sec": 0 00:23:12.801 }, 00:23:12.801 "claimed": false, 00:23:12.801 "zoned": false, 00:23:12.801 "supported_io_types": { 00:23:12.801 "read": true, 00:23:12.801 "write": true, 00:23:12.801 "unmap": true, 00:23:12.801 "flush": true, 00:23:12.801 "reset": true, 00:23:12.801 "nvme_admin": false, 00:23:12.801 "nvme_io": false, 00:23:12.801 "nvme_io_md": false, 00:23:12.801 "write_zeroes": true, 00:23:12.801 "zcopy": true, 00:23:12.801 "get_zone_info": false, 00:23:12.801 "zone_management": false, 00:23:12.801 "zone_append": false, 00:23:12.801 "compare": false, 00:23:12.801 "compare_and_write": false, 00:23:12.801 "abort": true, 00:23:12.801 "seek_hole": false, 00:23:12.801 "seek_data": false, 00:23:12.801 "copy": true, 00:23:12.801 "nvme_iov_md": false 00:23:12.801 }, 00:23:12.801 "memory_domains": [ 00:23:12.801 { 00:23:12.801 "dma_device_id": "system", 00:23:12.801 
"dma_device_type": 1 00:23:12.801 }, 00:23:12.801 { 00:23:12.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.801 "dma_device_type": 2 00:23:12.801 } 00:23:12.801 ], 00:23:12.801 "driver_specific": {} 00:23:12.801 }, 00:23:12.801 { 00:23:12.801 "name": "Malloc5", 00:23:12.801 "aliases": [ 00:23:12.801 "7eb5b3eb-6ca5-4e5f-b689-25dbea364f9c" 00:23:12.801 ], 00:23:12.801 "product_name": "Malloc disk", 00:23:12.801 "block_size": 512, 00:23:12.801 "num_blocks": 131072, 00:23:12.801 "uuid": "7eb5b3eb-6ca5-4e5f-b689-25dbea364f9c", 00:23:12.801 "assigned_rate_limits": { 00:23:12.801 "rw_ios_per_sec": 0, 00:23:12.801 "rw_mbytes_per_sec": 0, 00:23:12.801 "r_mbytes_per_sec": 0, 00:23:12.801 "w_mbytes_per_sec": 0 00:23:12.801 }, 00:23:12.801 "claimed": false, 00:23:12.801 "zoned": false, 00:23:12.801 "supported_io_types": { 00:23:12.801 "read": true, 00:23:12.801 "write": true, 00:23:12.801 "unmap": true, 00:23:12.801 "flush": true, 00:23:12.801 "reset": true, 00:23:12.801 "nvme_admin": false, 00:23:12.801 "nvme_io": false, 00:23:12.801 "nvme_io_md": false, 00:23:12.801 "write_zeroes": true, 00:23:12.801 "zcopy": true, 00:23:12.802 "get_zone_info": false, 00:23:12.802 "zone_management": false, 00:23:12.802 "zone_append": false, 00:23:12.802 "compare": false, 00:23:12.802 "compare_and_write": false, 00:23:12.802 "abort": true, 00:23:12.802 "seek_hole": false, 00:23:12.802 "seek_data": false, 00:23:12.802 "copy": true, 00:23:12.802 "nvme_iov_md": false 00:23:12.802 }, 00:23:12.802 "memory_domains": [ 00:23:12.802 { 00:23:12.802 "dma_device_id": "system", 00:23:12.802 "dma_device_type": 1 00:23:12.802 }, 00:23:12.802 { 00:23:12.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.802 "dma_device_type": 2 00:23:12.802 } 00:23:12.802 ], 00:23:12.802 "driver_specific": {} 00:23:12.802 } 00:23:12.802 ] 00:23:12.802 17:01:14 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@53 -- # trap - SIGINT SIGTERM EXIT 00:23:12.802 17:01:14 iscsi_tgt.iscsi_tgt_rpc_config -- 
rpc_config/rpc_config.sh@55 -- # iscsicleanup 00:23:12.802 17:01:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:23:12.802 Cleaning up iSCSI connection 00:23:12.802 17:01:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:23:12.802 iscsiadm: No matching sessions found 00:23:12.802 17:01:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@981 -- # true 00:23:12.802 17:01:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:23:12.802 iscsiadm: No records found 00:23:12.802 17:01:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@982 -- # true 00:23:12.802 17:01:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@983 -- # rm -rf 00:23:12.802 17:01:14 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@56 -- # killprocess 67974 00:23:12.802 17:01:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@948 -- # '[' -z 67974 ']' 00:23:12.802 17:01:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@952 -- # kill -0 67974 00:23:12.802 17:01:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@953 -- # uname 00:23:13.060 17:01:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:13.060 17:01:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67974 00:23:13.060 killing process with pid 67974 00:23:13.060 17:01:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:13.060 17:01:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:13.060 17:01:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67974' 00:23:13.060 17:01:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@967 -- # kill 67974 00:23:13.060 
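The `killprocess 67974` sequence traced above (the `kill -0` liveness probe, the `kill`, then the `wait`) follows a common shell teardown pattern. A minimal sketch of that pattern — not the actual `autotest_common.sh` helper, which also checks `uname` and the process name:

```shell
# Minimal sketch of the killprocess pattern: verify the pid is alive
# with kill -0, send SIGTERM, then wait so the exit status is reaped.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1   # nothing to kill
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # non-zero exit from SIGTERM is expected
}
```

Reaping with `wait` matters here: the trace's `-- # wait 67974` is what guarantees the target process has actually exited before the next test stage starts.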
17:01:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@972 -- # wait 67974 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@58 -- # iscsitestfini 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:23:16.479 ************************************ 00:23:16.479 END TEST iscsi_tgt_rpc_config 00:23:16.479 ************************************ 00:23:16.479 00:23:16.479 real 0m41.125s 00:23:16.479 user 1m8.715s 00:23:16.479 sys 0m5.174s 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:23:16.479 17:01:17 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:23:16.479 17:01:17 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@36 -- # run_test iscsi_tgt_iscsi_lvol /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol/iscsi_lvol.sh 00:23:16.479 17:01:17 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:16.479 17:01:17 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:16.479 17:01:17 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:23:16.479 ************************************ 00:23:16.479 START TEST iscsi_tgt_iscsi_lvol 00:23:16.479 ************************************ 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol/iscsi_lvol.sh 00:23:16.479 * Looking for test storage... 
00:23:16.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:23:16.479 17:01:17 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@11 -- # iscsitestinit 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@13 -- # MALLOC_BDEV_SIZE=128 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@15 -- # '[' 1 -eq 1 ']' 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@16 -- # NUM_LVS=10 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@17 -- # NUM_LVOL=10 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@23 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@24 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@26 -- # timing_enter start_iscsi_tgt 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:23:16.479 Process pid: 68654 00:23:16.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@29 -- # pid=68654 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@30 -- # echo 'Process pid: 68654' 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@32 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@34 -- # waitforlisten 68654 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@28 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@829 -- # '[' -z 68654 ']' 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:16.479 17:01:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:23:16.737 [2024-07-22 17:01:18.115710] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
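The `waitforlisten 68654` call above blocks until the freshly started `iscsi_tgt` app is listening on `/var/tmp/spdk.sock`. A minimal sketch of that polling pattern, under the assumption that checking for the UNIX-socket path plus a pid liveness probe is sufficient (the real helper also issues an RPC to confirm readiness); the retry count and interval here are illustrative:

```shell
# Minimal sketch of the waitforlisten pattern: poll until the app's
# UNIX domain socket appears, bailing out early if the app dies.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
    while ((retries--)); do
        [ -S "$sock" ] && return 0           # socket is up: app is listening
        kill -0 "$pid" 2>/dev/null || return 1  # app exited before listening
        sleep 0.1
    done
    return 1                                  # timed out
}
# usage: waitforlisten "$pid" /var/tmp/spdk.sock
```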
00:23:16.737 [2024-07-22 17:01:18.115896] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68654 ] 00:23:16.737 [2024-07-22 17:01:18.280875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:16.995 [2024-07-22 17:01:18.564912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.995 [2024-07-22 17:01:18.565059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.995 [2024-07-22 17:01:18.565224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:16.995 [2024-07-22 17:01:18.565306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.565 17:01:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.565 17:01:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@862 -- # return 0 00:23:17.565 17:01:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 16 00:23:17.826 17:01:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:23:18.769 iscsi_tgt is listening. Running tests... 00:23:18.769 17:01:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@37 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:23:18.769 17:01:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@39 -- # timing_exit start_iscsi_tgt 00:23:18.769 17:01:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:18.769 17:01:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:23:19.028 17:01:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@41 -- # timing_enter setup 00:23:19.028 17:01:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:19.028 17:01:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:23:19.028 17:01:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:23:19.286 17:01:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # seq 1 10 00:23:19.286 17:01:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:23:19.286 17:01:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=3 00:23:19.286 17:01:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 3 ANY 10.0.0.2/32 00:23:19.545 17:01:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 1 -eq 1 ']' 00:23:19.545 17:01:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:23:19.804 17:01:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@50 -- # malloc_bdevs='Malloc0 ' 00:23:19.804 17:01:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:23:20.378 17:01:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@51 -- # malloc_bdevs+=Malloc1 00:23:20.378 17:01:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@52 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:23:20.636 17:01:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@53 -- # bdev=raid0 00:23:20.636 17:01:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs_1 -c 1048576 00:23:20.894 17:01:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=395fd5b2-feb5-4789-abf4-9365e319a58a 00:23:20.894 17:01:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:23:20.895 17:01:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:23:20.895 17:01:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:20.895 17:01:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 395fd5b2-feb5-4789-abf4-9365e319a58a lbd_1 10 00:23:21.458 17:01:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d13f6e99-db80-464e-a460-bd8eee61a98c 00:23:21.458 17:01:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d13f6e99-db80-464e-a460-bd8eee61a98c:0 ' 00:23:21.458 17:01:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:21.458 17:01:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 395fd5b2-feb5-4789-abf4-9365e319a58a lbd_2 10 00:23:21.717 17:01:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b4da8232-234a-4729-906f-9eac1a343ab1 00:23:21.717 17:01:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b4da8232-234a-4729-906f-9eac1a343ab1:1 ' 00:23:21.717 17:01:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:21.717 17:01:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 395fd5b2-feb5-4789-abf4-9365e319a58a lbd_3 10 00:23:21.975 17:01:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d6c366f4-7150-428f-b4c3-3483de5d9222 00:23:21.975 17:01:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d6c366f4-7150-428f-b4c3-3483de5d9222:2 ' 00:23:21.975 17:01:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:21.975 17:01:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 395fd5b2-feb5-4789-abf4-9365e319a58a lbd_4 10 00:23:22.233 17:01:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ad4072a3-144c-4344-8543-bcac1f472a4d 00:23:22.233 17:01:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ad4072a3-144c-4344-8543-bcac1f472a4d:3 ' 00:23:22.233 17:01:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:22.233 17:01:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 395fd5b2-feb5-4789-abf4-9365e319a58a lbd_5 10 00:23:22.491 17:01:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b4a8be2e-61db-44bd-9498-865dbbadee6a 00:23:22.491 17:01:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b4a8be2e-61db-44bd-9498-865dbbadee6a:4 ' 00:23:22.491 17:01:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:22.491 17:01:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 395fd5b2-feb5-4789-abf4-9365e319a58a lbd_6 10 00:23:22.749 17:01:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=88d93249-d637-468d-94cc-b076fede7193 00:23:22.749 17:01:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@62 -- # LUNs+='88d93249-d637-468d-94cc-b076fede7193:5 ' 00:23:22.749 17:01:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:22.749 17:01:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 395fd5b2-feb5-4789-abf4-9365e319a58a lbd_7 10 00:23:23.007 17:01:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=35238282-4663-4a58-a781-2b7dbd91222c 00:23:23.007 17:01:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='35238282-4663-4a58-a781-2b7dbd91222c:6 ' 00:23:23.007 17:01:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:23.007 17:01:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 395fd5b2-feb5-4789-abf4-9365e319a58a lbd_8 10 00:23:23.266 17:01:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=bc249f95-5c37-452e-b078-7c7b506b2cff 00:23:23.266 17:01:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='bc249f95-5c37-452e-b078-7c7b506b2cff:7 ' 00:23:23.266 17:01:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:23.266 17:01:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 395fd5b2-feb5-4789-abf4-9365e319a58a lbd_9 10 00:23:23.524 17:01:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=7aa58008-c79e-4d3d-a5eb-4267eaf64dc9 00:23:23.524 17:01:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='7aa58008-c79e-4d3d-a5eb-4267eaf64dc9:8 ' 00:23:23.524 17:01:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:23.524 17:01:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 395fd5b2-feb5-4789-abf4-9365e319a58a lbd_10 10 00:23:23.783 17:01:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=8f25b47c-5808-42b1-8d59-a439e71726b7 00:23:23.783 17:01:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='8f25b47c-5808-42b1-8d59-a439e71726b7:9 ' 00:23:23.783 17:01:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias 'd13f6e99-db80-464e-a460-bd8eee61a98c:0 b4da8232-234a-4729-906f-9eac1a343ab1:1 d6c366f4-7150-428f-b4c3-3483de5d9222:2 ad4072a3-144c-4344-8543-bcac1f472a4d:3 b4a8be2e-61db-44bd-9498-865dbbadee6a:4 88d93249-d637-468d-94cc-b076fede7193:5 35238282-4663-4a58-a781-2b7dbd91222c:6 bc249f95-5c37-452e-b078-7c7b506b2cff:7 7aa58008-c79e-4d3d-a5eb-4267eaf64dc9:8 8f25b47c-5808-42b1-8d59-a439e71726b7:9 ' 1:3 256 -d 00:23:24.041 17:01:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:23:24.041 17:01:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=4 00:23:24.041 17:01:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 4 ANY 10.0.0.2/32 00:23:24.316 17:01:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 2 -eq 1 ']' 00:23:24.316 17:01:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:23:24.586 17:01:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc2 00:23:24.586 17:01:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc2 lvs_2 -c 1048576 00:23:25.152 17:01:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=14d3aaa3-2d8d-4a39-b353-d043968e9e53 00:23:25.152 
17:01:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:23:25.152 17:01:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:23:25.152 17:01:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:25.152 17:01:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 14d3aaa3-2d8d-4a39-b353-d043968e9e53 lbd_1 10 00:23:25.410 17:01:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=4e466e50-9f55-4fdb-8ee4-f5d1f9b61741 00:23:25.410 17:01:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='4e466e50-9f55-4fdb-8ee4-f5d1f9b61741:0 ' 00:23:25.410 17:01:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:25.410 17:01:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 14d3aaa3-2d8d-4a39-b353-d043968e9e53 lbd_2 10 00:23:25.668 17:01:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0bd16b4d-d250-4ee4-964f-cbf0af7894bc 00:23:25.668 17:01:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0bd16b4d-d250-4ee4-964f-cbf0af7894bc:1 ' 00:23:25.668 17:01:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:25.668 17:01:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 14d3aaa3-2d8d-4a39-b353-d043968e9e53 lbd_3 10 00:23:25.925 17:01:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=8eeb085b-b032-4677-a9d7-e7f1bfa24917 00:23:25.925 17:01:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='8eeb085b-b032-4677-a9d7-e7f1bfa24917:2 ' 00:23:25.925 17:01:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:25.925 17:01:27 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 14d3aaa3-2d8d-4a39-b353-d043968e9e53 lbd_4 10 00:23:26.238 17:01:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=5809157c-317e-432c-b373-9f7fc953f01e 00:23:26.238 17:01:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='5809157c-317e-432c-b373-9f7fc953f01e:3 ' 00:23:26.238 17:01:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:26.238 17:01:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 14d3aaa3-2d8d-4a39-b353-d043968e9e53 lbd_5 10 00:23:26.538 17:01:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=3bb763af-21e1-432a-864f-a349d02fd105 00:23:26.538 17:01:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3bb763af-21e1-432a-864f-a349d02fd105:4 ' 00:23:26.538 17:01:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:26.538 17:01:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 14d3aaa3-2d8d-4a39-b353-d043968e9e53 lbd_6 10 00:23:26.796 17:01:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c04d3fd9-6514-4b94-9c85-5d5c2272d478 00:23:26.796 17:01:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c04d3fd9-6514-4b94-9c85-5d5c2272d478:5 ' 00:23:26.796 17:01:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:26.796 17:01:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 14d3aaa3-2d8d-4a39-b353-d043968e9e53 lbd_7 10 00:23:27.054 17:01:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b6e33051-f0fd-49e4-bc62-977f023d47e3 
00:23:27.054 17:01:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b6e33051-f0fd-49e4-bc62-977f023d47e3:6 ' 00:23:27.054 17:01:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:27.054 17:01:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 14d3aaa3-2d8d-4a39-b353-d043968e9e53 lbd_8 10 00:23:27.054 17:01:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=68a2411b-5f07-43db-8871-cc6828f97591 00:23:27.054 17:01:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='68a2411b-5f07-43db-8871-cc6828f97591:7 ' 00:23:27.054 17:01:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:27.054 17:01:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 14d3aaa3-2d8d-4a39-b353-d043968e9e53 lbd_9 10 00:23:27.312 17:01:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=7abddebb-0cbc-49a0-b26e-ca2679f254ac 00:23:27.312 17:01:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='7abddebb-0cbc-49a0-b26e-ca2679f254ac:8 ' 00:23:27.312 17:01:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:27.312 17:01:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 14d3aaa3-2d8d-4a39-b353-d043968e9e53 lbd_10 10 00:23:27.878 17:01:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=9bb8af47-6674-4eb4-9917-afbce26f0931 00:23:27.878 17:01:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='9bb8af47-6674-4eb4-9917-afbce26f0931:9 ' 00:23:27.878 17:01:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target2 Target2_alias 
'4e466e50-9f55-4fdb-8ee4-f5d1f9b61741:0 0bd16b4d-d250-4ee4-964f-cbf0af7894bc:1 8eeb085b-b032-4677-a9d7-e7f1bfa24917:2 5809157c-317e-432c-b373-9f7fc953f01e:3 3bb763af-21e1-432a-864f-a349d02fd105:4 c04d3fd9-6514-4b94-9c85-5d5c2272d478:5 b6e33051-f0fd-49e4-bc62-977f023d47e3:6 68a2411b-5f07-43db-8871-cc6828f97591:7 7abddebb-0cbc-49a0-b26e-ca2679f254ac:8 9bb8af47-6674-4eb4-9917-afbce26f0931:9 ' 1:4 256 -d 00:23:27.878 17:01:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:23:27.878 17:01:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=5 00:23:27.878 17:01:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 5 ANY 10.0.0.2/32 00:23:28.136 17:01:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 3 -eq 1 ']' 00:23:28.136 17:01:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:23:28.727 17:01:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc3 00:23:28.727 17:01:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc3 lvs_3 -c 1048576 00:23:28.985 17:01:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=62d76064-371d-4a95-bf4f-c3837fd8c1d2 00:23:28.985 17:01:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:23:28.985 17:01:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:23:28.985 17:01:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:28.985 17:01:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 62d76064-371d-4a95-bf4f-c3837fd8c1d2 lbd_1 10 00:23:29.243 17:01:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@61 -- # lb_name=4c1d1390-1e2d-4a41-9802-71c40b231f45 00:23:29.243 17:01:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='4c1d1390-1e2d-4a41-9802-71c40b231f45:0 ' 00:23:29.243 17:01:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:29.243 17:01:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 62d76064-371d-4a95-bf4f-c3837fd8c1d2 lbd_2 10 00:23:29.501 17:01:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=fd1f2b58-9ccf-49d1-9690-12d73803e60c 00:23:29.501 17:01:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='fd1f2b58-9ccf-49d1-9690-12d73803e60c:1 ' 00:23:29.501 17:01:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:29.501 17:01:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 62d76064-371d-4a95-bf4f-c3837fd8c1d2 lbd_3 10 00:23:29.760 17:01:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=3e51f166-7d51-4c69-a853-0d0762608871 00:23:29.760 17:01:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3e51f166-7d51-4c69-a853-0d0762608871:2 ' 00:23:29.760 17:01:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:29.760 17:01:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 62d76064-371d-4a95-bf4f-c3837fd8c1d2 lbd_4 10 00:23:30.031 17:01:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b54151f5-d85c-4c32-916a-4df39fcb6a3f 00:23:30.032 17:01:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b54151f5-d85c-4c32-916a-4df39fcb6a3f:3 ' 00:23:30.032 17:01:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 
$NUM_LVOL) 00:23:30.032 17:01:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 62d76064-371d-4a95-bf4f-c3837fd8c1d2 lbd_5 10 00:23:30.300 17:01:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=73173167-53ae-4227-95db-eb31749fb887 00:23:30.300 17:01:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='73173167-53ae-4227-95db-eb31749fb887:4 ' 00:23:30.300 17:01:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:30.300 17:01:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 62d76064-371d-4a95-bf4f-c3837fd8c1d2 lbd_6 10 00:23:30.558 17:01:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=1fd0f2ab-69e2-425b-8a3d-18ce1ac22dd6 00:23:30.558 17:01:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='1fd0f2ab-69e2-425b-8a3d-18ce1ac22dd6:5 ' 00:23:30.558 17:01:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:30.558 17:01:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 62d76064-371d-4a95-bf4f-c3837fd8c1d2 lbd_7 10 00:23:30.815 17:01:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=507ae000-22b8-4528-91ff-85375726491a 00:23:30.815 17:01:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='507ae000-22b8-4528-91ff-85375726491a:6 ' 00:23:30.815 17:01:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:30.815 17:01:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 62d76064-371d-4a95-bf4f-c3837fd8c1d2 lbd_8 10 00:23:31.073 17:01:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
lb_name=05e7a8cf-99fd-4926-a85d-84434955e6c8 00:23:31.073 17:01:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='05e7a8cf-99fd-4926-a85d-84434955e6c8:7 ' 00:23:31.073 17:01:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:31.073 17:01:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 62d76064-371d-4a95-bf4f-c3837fd8c1d2 lbd_9 10 00:23:31.331 17:01:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2adc98e4-d152-4e3b-8790-ee28dec05bf9 00:23:31.331 17:01:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2adc98e4-d152-4e3b-8790-ee28dec05bf9:8 ' 00:23:31.331 17:01:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:31.331 17:01:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 62d76064-371d-4a95-bf4f-c3837fd8c1d2 lbd_10 10 00:23:31.588 17:01:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=5776109a-3405-4bcc-99c1-e90ce728fda2 00:23:31.588 17:01:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='5776109a-3405-4bcc-99c1-e90ce728fda2:9 ' 00:23:31.588 17:01:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias '4c1d1390-1e2d-4a41-9802-71c40b231f45:0 fd1f2b58-9ccf-49d1-9690-12d73803e60c:1 3e51f166-7d51-4c69-a853-0d0762608871:2 b54151f5-d85c-4c32-916a-4df39fcb6a3f:3 73173167-53ae-4227-95db-eb31749fb887:4 1fd0f2ab-69e2-425b-8a3d-18ce1ac22dd6:5 507ae000-22b8-4528-91ff-85375726491a:6 05e7a8cf-99fd-4926-a85d-84434955e6c8:7 2adc98e4-d152-4e3b-8790-ee28dec05bf9:8 5776109a-3405-4bcc-99c1-e90ce728fda2:9 ' 1:5 256 -d 00:23:31.847 17:01:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 
00:23:31.847 17:01:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=6 00:23:31.847 17:01:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 6 ANY 10.0.0.2/32 00:23:32.108 17:01:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 4 -eq 1 ']' 00:23:32.108 17:01:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:23:32.673 17:01:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc4 00:23:32.673 17:01:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc4 lvs_4 -c 1048576 00:23:32.930 17:01:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=27dd3a3c-5b17-49d6-92f6-fec39d9e00b0 00:23:32.930 17:01:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:23:32.930 17:01:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:23:32.930 17:01:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:32.930 17:01:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 27dd3a3c-5b17-49d6-92f6-fec39d9e00b0 lbd_1 10 00:23:33.496 17:01:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=4701b74d-af34-40b1-b77f-b1a55e8fb170 00:23:33.496 17:01:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='4701b74d-af34-40b1-b77f-b1a55e8fb170:0 ' 00:23:33.496 17:01:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:33.496 17:01:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 27dd3a3c-5b17-49d6-92f6-fec39d9e00b0 lbd_2 10 00:23:33.496 17:01:35 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=937d9296-767d-4e67-b34c-b410caade4c6 00:23:33.755 17:01:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='937d9296-767d-4e67-b34c-b410caade4c6:1 ' 00:23:33.755 17:01:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:33.755 17:01:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 27dd3a3c-5b17-49d6-92f6-fec39d9e00b0 lbd_3 10 00:23:34.014 17:01:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0af45e5c-cf38-4345-9403-8198c7d0f966 00:23:34.014 17:01:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0af45e5c-cf38-4345-9403-8198c7d0f966:2 ' 00:23:34.014 17:01:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:34.014 17:01:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 27dd3a3c-5b17-49d6-92f6-fec39d9e00b0 lbd_4 10 00:23:34.275 17:01:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=9269a867-e343-4aaa-9e77-cb321a4307b8 00:23:34.275 17:01:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='9269a867-e343-4aaa-9e77-cb321a4307b8:3 ' 00:23:34.275 17:01:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:34.275 17:01:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 27dd3a3c-5b17-49d6-92f6-fec39d9e00b0 lbd_5 10 00:23:34.534 17:01:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=4ba9a1bc-95b4-427b-bb9b-ed5ffc7e4607 00:23:34.534 17:01:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='4ba9a1bc-95b4-427b-bb9b-ed5ffc7e4607:4 ' 00:23:34.534 17:01:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:34.534 17:01:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 27dd3a3c-5b17-49d6-92f6-fec39d9e00b0 lbd_6 10 00:23:34.791 17:01:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c96b521f-8d6c-480c-b95b-a3e360167ce2 00:23:34.791 17:01:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c96b521f-8d6c-480c-b95b-a3e360167ce2:5 ' 00:23:34.791 17:01:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:34.791 17:01:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 27dd3a3c-5b17-49d6-92f6-fec39d9e00b0 lbd_7 10 00:23:35.049 17:01:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c59065a6-d796-4a95-8967-f7e18b0b9187 00:23:35.049 17:01:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c59065a6-d796-4a95-8967-f7e18b0b9187:6 ' 00:23:35.049 17:01:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:35.049 17:01:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 27dd3a3c-5b17-49d6-92f6-fec39d9e00b0 lbd_8 10 00:23:35.308 17:01:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c8d8e4cd-a6fb-4392-9a78-69d2a93b3a22 00:23:35.308 17:01:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c8d8e4cd-a6fb-4392-9a78-69d2a93b3a22:7 ' 00:23:35.308 17:01:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:35.308 17:01:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 27dd3a3c-5b17-49d6-92f6-fec39d9e00b0 lbd_9 10 00:23:35.567 17:01:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@61 -- # lb_name=979f2c74-e4d1-4288-a70d-e84dfb62292f 00:23:35.567 17:01:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='979f2c74-e4d1-4288-a70d-e84dfb62292f:8 ' 00:23:35.567 17:01:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:35.567 17:01:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 27dd3a3c-5b17-49d6-92f6-fec39d9e00b0 lbd_10 10 00:23:35.825 17:01:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2038c1ef-a7fe-434b-999b-4320b9b786d6 00:23:35.825 17:01:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2038c1ef-a7fe-434b-999b-4320b9b786d6:9 ' 00:23:35.825 17:01:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target4 Target4_alias '4701b74d-af34-40b1-b77f-b1a55e8fb170:0 937d9296-767d-4e67-b34c-b410caade4c6:1 0af45e5c-cf38-4345-9403-8198c7d0f966:2 9269a867-e343-4aaa-9e77-cb321a4307b8:3 4ba9a1bc-95b4-427b-bb9b-ed5ffc7e4607:4 c96b521f-8d6c-480c-b95b-a3e360167ce2:5 c59065a6-d796-4a95-8967-f7e18b0b9187:6 c8d8e4cd-a6fb-4392-9a78-69d2a93b3a22:7 979f2c74-e4d1-4288-a70d-e84dfb62292f:8 2038c1ef-a7fe-434b-999b-4320b9b786d6:9 ' 1:6 256 -d 00:23:36.082 17:01:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:23:36.082 17:01:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=7 00:23:36.082 17:01:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 7 ANY 10.0.0.2/32 00:23:36.340 17:01:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 5 -eq 1 ']' 00:23:36.340 17:01:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:23:36.906 
17:01:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc5 00:23:36.906 17:01:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc5 lvs_5 -c 1048576 00:23:37.164 17:01:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=93879ad7-8b5c-4ade-bc16-44a57692c747 00:23:37.164 17:01:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:23:37.164 17:01:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:23:37.164 17:01:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:37.164 17:01:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 93879ad7-8b5c-4ade-bc16-44a57692c747 lbd_1 10 00:23:37.164 17:01:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=8eafc688-0b5c-49b6-a659-6b0e4398344b 00:23:37.164 17:01:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='8eafc688-0b5c-49b6-a659-6b0e4398344b:0 ' 00:23:37.164 17:01:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:37.436 17:01:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 93879ad7-8b5c-4ade-bc16-44a57692c747 lbd_2 10 00:23:37.437 17:01:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c601349c-18e9-4396-8ed6-c8c1cc93cdbb 00:23:37.437 17:01:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c601349c-18e9-4396-8ed6-c8c1cc93cdbb:1 ' 00:23:37.437 17:01:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:37.437 17:01:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 93879ad7-8b5c-4ade-bc16-44a57692c747 lbd_3 10 
00:23:37.694 17:01:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=70009d08-9e9c-4f6d-adf6-7eb16e5e8050 00:23:37.694 17:01:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='70009d08-9e9c-4f6d-adf6-7eb16e5e8050:2 ' 00:23:37.694 17:01:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:37.694 17:01:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 93879ad7-8b5c-4ade-bc16-44a57692c747 lbd_4 10 00:23:38.014 17:01:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=909b9130-d657-48a2-b47f-ab609af18b1f 00:23:38.014 17:01:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='909b9130-d657-48a2-b47f-ab609af18b1f:3 ' 00:23:38.014 17:01:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:38.014 17:01:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 93879ad7-8b5c-4ade-bc16-44a57692c747 lbd_5 10 00:23:38.274 17:01:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d90be811-4c0e-4733-8576-e734edd7ff0f 00:23:38.274 17:01:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d90be811-4c0e-4733-8576-e734edd7ff0f:4 ' 00:23:38.274 17:01:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:38.274 17:01:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 93879ad7-8b5c-4ade-bc16-44a57692c747 lbd_6 10 00:23:38.531 17:01:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=35ad4374-3c51-4735-a63c-21e35a932c67 00:23:38.531 17:01:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='35ad4374-3c51-4735-a63c-21e35a932c67:5 ' 00:23:38.531 17:01:40 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:38.531 17:01:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 93879ad7-8b5c-4ade-bc16-44a57692c747 lbd_7 10 00:23:38.789 17:01:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d49ad5b8-11c2-485a-ba58-de42fdf5dc22 00:23:38.789 17:01:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d49ad5b8-11c2-485a-ba58-de42fdf5dc22:6 ' 00:23:38.789 17:01:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:38.789 17:01:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 93879ad7-8b5c-4ade-bc16-44a57692c747 lbd_8 10 00:23:39.048 17:01:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=39638535-7575-4c07-9c5f-b71abfb9a488 00:23:39.048 17:01:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='39638535-7575-4c07-9c5f-b71abfb9a488:7 ' 00:23:39.048 17:01:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:39.048 17:01:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 93879ad7-8b5c-4ade-bc16-44a57692c747 lbd_9 10 00:23:39.626 17:01:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=66f3a466-cca9-43eb-856b-99322b7dabde 00:23:39.626 17:01:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='66f3a466-cca9-43eb-856b-99322b7dabde:8 ' 00:23:39.626 17:01:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:39.626 17:01:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 93879ad7-8b5c-4ade-bc16-44a57692c747 lbd_10 10 00:23:39.892 17:01:41 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e3e8b2e8-f239-471f-a385-34046536b309 00:23:39.892 17:01:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e3e8b2e8-f239-471f-a385-34046536b309:9 ' 00:23:39.892 17:01:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target5 Target5_alias '8eafc688-0b5c-49b6-a659-6b0e4398344b:0 c601349c-18e9-4396-8ed6-c8c1cc93cdbb:1 70009d08-9e9c-4f6d-adf6-7eb16e5e8050:2 909b9130-d657-48a2-b47f-ab609af18b1f:3 d90be811-4c0e-4733-8576-e734edd7ff0f:4 35ad4374-3c51-4735-a63c-21e35a932c67:5 d49ad5b8-11c2-485a-ba58-de42fdf5dc22:6 39638535-7575-4c07-9c5f-b71abfb9a488:7 66f3a466-cca9-43eb-856b-99322b7dabde:8 e3e8b2e8-f239-471f-a385-34046536b309:9 ' 1:7 256 -d 00:23:40.151 17:01:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:23:40.151 17:01:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=8 00:23:40.151 17:01:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 8 ANY 10.0.0.2/32 00:23:40.409 17:01:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 6 -eq 1 ']' 00:23:40.409 17:01:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:23:40.666 17:01:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc6 00:23:40.666 17:01:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc6 lvs_6 -c 1048576 00:23:40.925 17:01:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=96efdef5-9b97-4501-bb1b-036f41987b5e 00:23:40.925 17:01:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:23:40.925 17:01:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:23:40.925 17:01:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:40.925 17:01:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96efdef5-9b97-4501-bb1b-036f41987b5e lbd_1 10 00:23:41.182 17:01:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c9a45882-a380-46be-a592-27e4b7d90a56 00:23:41.182 17:01:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c9a45882-a380-46be-a592-27e4b7d90a56:0 ' 00:23:41.182 17:01:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:41.182 17:01:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96efdef5-9b97-4501-bb1b-036f41987b5e lbd_2 10 00:23:41.746 17:01:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a7bea5fe-5127-4ad2-94de-04e6a71ff2ab 00:23:41.746 17:01:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a7bea5fe-5127-4ad2-94de-04e6a71ff2ab:1 ' 00:23:41.746 17:01:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:41.746 17:01:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96efdef5-9b97-4501-bb1b-036f41987b5e lbd_3 10 00:23:41.746 17:01:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=85d3643d-7f1e-4cf0-8871-3b3ff2211d04 00:23:41.746 17:01:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='85d3643d-7f1e-4cf0-8871-3b3ff2211d04:2 ' 00:23:41.746 17:01:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:41.746 17:01:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
96efdef5-9b97-4501-bb1b-036f41987b5e lbd_4 10 00:23:42.031 17:01:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=81ca42a9-fb4f-43fa-b098-720f5371c76a 00:23:42.031 17:01:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='81ca42a9-fb4f-43fa-b098-720f5371c76a:3 ' 00:23:42.031 17:01:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:42.031 17:01:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96efdef5-9b97-4501-bb1b-036f41987b5e lbd_5 10 00:23:42.299 17:01:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=fbca4a4b-5169-45f5-a8fe-bc3fd2d4e830 00:23:42.299 17:01:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='fbca4a4b-5169-45f5-a8fe-bc3fd2d4e830:4 ' 00:23:42.299 17:01:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:42.299 17:01:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96efdef5-9b97-4501-bb1b-036f41987b5e lbd_6 10 00:23:42.863 17:01:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=51b0b2fc-d526-409f-a3ac-40e1c3f95156 00:23:42.863 17:01:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='51b0b2fc-d526-409f-a3ac-40e1c3f95156:5 ' 00:23:42.863 17:01:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:42.863 17:01:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96efdef5-9b97-4501-bb1b-036f41987b5e lbd_7 10 00:23:42.863 17:01:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=4c1bf29d-d26f-4c65-9623-f10e55faa701 00:23:42.863 17:01:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='4c1bf29d-d26f-4c65-9623-f10e55faa701:6 ' 
00:23:42.863 17:01:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:42.863 17:01:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96efdef5-9b97-4501-bb1b-036f41987b5e lbd_8 10 00:23:43.122 17:01:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0d57f356-8b66-4fb6-ab47-66c2bd8dd3a1 00:23:43.122 17:01:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0d57f356-8b66-4fb6-ab47-66c2bd8dd3a1:7 ' 00:23:43.122 17:01:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:43.122 17:01:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96efdef5-9b97-4501-bb1b-036f41987b5e lbd_9 10 00:23:43.738 17:01:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=47794f5e-d792-40d6-a6ad-37f2f21793c5 00:23:43.738 17:01:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='47794f5e-d792-40d6-a6ad-37f2f21793c5:8 ' 00:23:43.738 17:01:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:43.738 17:01:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96efdef5-9b97-4501-bb1b-036f41987b5e lbd_10 10 00:23:43.996 17:01:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=8ec0be5a-1e00-479d-b0ca-410a265b6a22 00:23:43.996 17:01:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='8ec0be5a-1e00-479d-b0ca-410a265b6a22:9 ' 00:23:43.996 17:01:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target6 Target6_alias 'c9a45882-a380-46be-a592-27e4b7d90a56:0 a7bea5fe-5127-4ad2-94de-04e6a71ff2ab:1 85d3643d-7f1e-4cf0-8871-3b3ff2211d04:2 
81ca42a9-fb4f-43fa-b098-720f5371c76a:3 fbca4a4b-5169-45f5-a8fe-bc3fd2d4e830:4 51b0b2fc-d526-409f-a3ac-40e1c3f95156:5 4c1bf29d-d26f-4c65-9623-f10e55faa701:6 0d57f356-8b66-4fb6-ab47-66c2bd8dd3a1:7 47794f5e-d792-40d6-a6ad-37f2f21793c5:8 8ec0be5a-1e00-479d-b0ca-410a265b6a22:9 ' 1:8 256 -d 00:23:44.253 17:01:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:23:44.254 17:01:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=9 00:23:44.254 17:01:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 9 ANY 10.0.0.2/32 00:23:44.512 17:01:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 7 -eq 1 ']' 00:23:44.512 17:01:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:23:44.769 17:01:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc7 00:23:44.769 17:01:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc7 lvs_7 -c 1048576 00:23:45.028 17:01:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=9b04412e-590b-4da3-b733-92ca4c069194 00:23:45.028 17:01:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:23:45.028 17:01:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:23:45.028 17:01:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:45.028 17:01:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9b04412e-590b-4da3-b733-92ca4c069194 lbd_1 10 00:23:45.285 17:01:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=dac80a95-97d2-4171-bcca-69550dc40bc0 00:23:45.285 17:01:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@62 -- # LUNs+='dac80a95-97d2-4171-bcca-69550dc40bc0:0 ' 00:23:45.285 17:01:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:45.285 17:01:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9b04412e-590b-4da3-b733-92ca4c069194 lbd_2 10 00:23:45.543 17:01:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=13644f67-b3fb-43ef-9dbf-05de1fccf77b 00:23:45.543 17:01:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='13644f67-b3fb-43ef-9dbf-05de1fccf77b:1 ' 00:23:45.543 17:01:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:45.543 17:01:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9b04412e-590b-4da3-b733-92ca4c069194 lbd_3 10 00:23:45.799 17:01:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=4ce50069-307b-426d-a3b2-bcf7e30f0175 00:23:45.799 17:01:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='4ce50069-307b-426d-a3b2-bcf7e30f0175:2 ' 00:23:45.799 17:01:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:45.799 17:01:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9b04412e-590b-4da3-b733-92ca4c069194 lbd_4 10 00:23:46.364 17:01:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=71d22744-7dd1-4edd-a4ec-04d4e1b05f62 00:23:46.364 17:01:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='71d22744-7dd1-4edd-a4ec-04d4e1b05f62:3 ' 00:23:46.365 17:01:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:46.365 17:01:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9b04412e-590b-4da3-b733-92ca4c069194 lbd_5 10 00:23:46.623 17:01:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=41e1f60e-9033-4924-850e-74e3062b6e8c 00:23:46.623 17:01:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='41e1f60e-9033-4924-850e-74e3062b6e8c:4 ' 00:23:46.623 17:01:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:46.623 17:01:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9b04412e-590b-4da3-b733-92ca4c069194 lbd_6 10 00:23:46.882 17:01:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=fc1273b4-8b05-4bc6-8ef0-1bfb9aaf6663 00:23:46.882 17:01:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='fc1273b4-8b05-4bc6-8ef0-1bfb9aaf6663:5 ' 00:23:46.883 17:01:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:46.883 17:01:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9b04412e-590b-4da3-b733-92ca4c069194 lbd_7 10 00:23:47.140 17:01:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=df0f6dbf-c035-42d7-a87a-ed842db5f9fb 00:23:47.140 17:01:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='df0f6dbf-c035-42d7-a87a-ed842db5f9fb:6 ' 00:23:47.140 17:01:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:47.140 17:01:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9b04412e-590b-4da3-b733-92ca4c069194 lbd_8 10 00:23:47.407 17:01:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=47a35bc4-cd20-4ad7-a1f1-fa4351905c1a 00:23:47.407 17:01:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@62 -- # LUNs+='47a35bc4-cd20-4ad7-a1f1-fa4351905c1a:7 ' 00:23:47.407 17:01:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:47.407 17:01:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9b04412e-590b-4da3-b733-92ca4c069194 lbd_9 10 00:23:47.673 17:01:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=5af7666d-06d8-4e8c-9793-cb1625af6678 00:23:47.673 17:01:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='5af7666d-06d8-4e8c-9793-cb1625af6678:8 ' 00:23:47.673 17:01:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:47.674 17:01:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9b04412e-590b-4da3-b733-92ca4c069194 lbd_10 10 00:23:47.932 17:01:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=92b7b1e4-4ddb-4e71-a9a0-436553001933 00:23:47.932 17:01:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='92b7b1e4-4ddb-4e71-a9a0-436553001933:9 ' 00:23:47.932 17:01:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target7 Target7_alias 'dac80a95-97d2-4171-bcca-69550dc40bc0:0 13644f67-b3fb-43ef-9dbf-05de1fccf77b:1 4ce50069-307b-426d-a3b2-bcf7e30f0175:2 71d22744-7dd1-4edd-a4ec-04d4e1b05f62:3 41e1f60e-9033-4924-850e-74e3062b6e8c:4 fc1273b4-8b05-4bc6-8ef0-1bfb9aaf6663:5 df0f6dbf-c035-42d7-a87a-ed842db5f9fb:6 47a35bc4-cd20-4ad7-a1f1-fa4351905c1a:7 5af7666d-06d8-4e8c-9793-cb1625af6678:8 92b7b1e4-4ddb-4e71-a9a0-436553001933:9 ' 1:9 256 -d 00:23:48.190 17:01:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:23:48.190 17:01:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=10 
00:23:48.190 17:01:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 10 ANY 10.0.0.2/32 00:23:48.449 17:01:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 8 -eq 1 ']' 00:23:48.449 17:01:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:23:49.014 17:01:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc8 00:23:49.014 17:01:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc8 lvs_8 -c 1048576 00:23:49.014 17:01:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=86a7d16d-41da-4d0a-84ea-5c6828c8ab1c 00:23:49.014 17:01:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:23:49.014 17:01:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:23:49.272 17:01:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:49.272 17:01:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 86a7d16d-41da-4d0a-84ea-5c6828c8ab1c lbd_1 10 00:23:49.539 17:01:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b848e8b1-23d3-41c6-a477-a25743b39fdc 00:23:49.539 17:01:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b848e8b1-23d3-41c6-a477-a25743b39fdc:0 ' 00:23:49.539 17:01:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:49.539 17:01:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 86a7d16d-41da-4d0a-84ea-5c6828c8ab1c lbd_2 10 00:23:49.798 17:01:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
lb_name=3429d6f7-125a-46ed-b19a-d323b7d857c7 00:23:49.798 17:01:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3429d6f7-125a-46ed-b19a-d323b7d857c7:1 ' 00:23:49.798 17:01:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:49.798 17:01:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 86a7d16d-41da-4d0a-84ea-5c6828c8ab1c lbd_3 10 00:23:50.057 17:01:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=dc91d58b-469e-4441-8f00-d5e92f9902db 00:23:50.057 17:01:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='dc91d58b-469e-4441-8f00-d5e92f9902db:2 ' 00:23:50.057 17:01:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:50.057 17:01:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 86a7d16d-41da-4d0a-84ea-5c6828c8ab1c lbd_4 10 00:23:50.315 17:01:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=6c587527-74ae-4613-94fd-3cf868857dd6 00:23:50.315 17:01:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='6c587527-74ae-4613-94fd-3cf868857dd6:3 ' 00:23:50.315 17:01:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:50.315 17:01:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 86a7d16d-41da-4d0a-84ea-5c6828c8ab1c lbd_5 10 00:23:50.572 17:01:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=deff845c-f8b1-476d-879d-5856e17b747c 00:23:50.572 17:01:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='deff845c-f8b1-476d-879d-5856e17b747c:4 ' 00:23:50.572 17:01:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:50.572 17:01:52 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 86a7d16d-41da-4d0a-84ea-5c6828c8ab1c lbd_6 10 00:23:50.831 17:01:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=43302fc2-8e2f-4f18-a7d9-bc39db3ebf79 00:23:50.831 17:01:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='43302fc2-8e2f-4f18-a7d9-bc39db3ebf79:5 ' 00:23:50.831 17:01:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:50.831 17:01:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 86a7d16d-41da-4d0a-84ea-5c6828c8ab1c lbd_7 10 00:23:51.089 17:01:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a809a92a-480c-476a-b170-9582f1c3d13b 00:23:51.089 17:01:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a809a92a-480c-476a-b170-9582f1c3d13b:6 ' 00:23:51.089 17:01:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:51.089 17:01:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 86a7d16d-41da-4d0a-84ea-5c6828c8ab1c lbd_8 10 00:23:51.348 17:01:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=07f0429a-ef8a-4d4a-abf2-68a129fe4c50 00:23:51.348 17:01:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='07f0429a-ef8a-4d4a-abf2-68a129fe4c50:7 ' 00:23:51.348 17:01:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:51.348 17:01:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 86a7d16d-41da-4d0a-84ea-5c6828c8ab1c lbd_9 10 00:23:51.610 17:01:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=68a891c5-73df-4f1c-9749-63dfce9794c3 
00:23:51.610 17:01:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='68a891c5-73df-4f1c-9749-63dfce9794c3:8 ' 00:23:51.610 17:01:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:51.610 17:01:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 86a7d16d-41da-4d0a-84ea-5c6828c8ab1c lbd_10 10 00:23:51.610 17:01:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e280f99b-3914-4896-9ccd-0be5c5ed270d 00:23:51.610 17:01:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e280f99b-3914-4896-9ccd-0be5c5ed270d:9 ' 00:23:51.610 17:01:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target8 Target8_alias 'b848e8b1-23d3-41c6-a477-a25743b39fdc:0 3429d6f7-125a-46ed-b19a-d323b7d857c7:1 dc91d58b-469e-4441-8f00-d5e92f9902db:2 6c587527-74ae-4613-94fd-3cf868857dd6:3 deff845c-f8b1-476d-879d-5856e17b747c:4 43302fc2-8e2f-4f18-a7d9-bc39db3ebf79:5 a809a92a-480c-476a-b170-9582f1c3d13b:6 07f0429a-ef8a-4d4a-abf2-68a129fe4c50:7 68a891c5-73df-4f1c-9749-63dfce9794c3:8 e280f99b-3914-4896-9ccd-0be5c5ed270d:9 ' 1:10 256 -d 00:23:52.174 17:01:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:23:52.174 17:01:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=11 00:23:52.174 17:01:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 11 ANY 10.0.0.2/32 00:23:52.174 17:01:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 9 -eq 1 ']' 00:23:52.174 17:01:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:23:52.741 17:01:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # 
bdev=Malloc9 00:23:52.741 17:01:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc9 lvs_9 -c 1048576 00:23:52.998 17:01:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=6d8e9ddf-4c73-40d9-8d96-f4cef7fc8461 00:23:52.998 17:01:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:23:52.998 17:01:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:23:52.998 17:01:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:52.998 17:01:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6d8e9ddf-4c73-40d9-8d96-f4cef7fc8461 lbd_1 10 00:23:53.256 17:01:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ead0fa27-402e-4448-86d7-64e942ae2b79 00:23:53.256 17:01:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ead0fa27-402e-4448-86d7-64e942ae2b79:0 ' 00:23:53.256 17:01:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:53.256 17:01:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6d8e9ddf-4c73-40d9-8d96-f4cef7fc8461 lbd_2 10 00:23:53.514 17:01:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=3f10ee08-b525-442b-88f5-269a46063509 00:23:53.514 17:01:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3f10ee08-b525-442b-88f5-269a46063509:1 ' 00:23:53.514 17:01:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:53.514 17:01:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6d8e9ddf-4c73-40d9-8d96-f4cef7fc8461 lbd_3 10 00:23:53.771 17:01:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@61 -- # lb_name=66925da0-3573-4bc1-ac9f-ef0c684bab13 00:23:53.771 17:01:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='66925da0-3573-4bc1-ac9f-ef0c684bab13:2 ' 00:23:53.771 17:01:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:53.771 17:01:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6d8e9ddf-4c73-40d9-8d96-f4cef7fc8461 lbd_4 10 00:23:54.337 17:01:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=58cf9914-f871-43db-94ab-9e50deeee8ff 00:23:54.337 17:01:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='58cf9914-f871-43db-94ab-9e50deeee8ff:3 ' 00:23:54.337 17:01:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:54.337 17:01:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6d8e9ddf-4c73-40d9-8d96-f4cef7fc8461 lbd_5 10 00:23:54.337 17:01:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b57a2bd1-6441-483b-b470-12cd60948344 00:23:54.337 17:01:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b57a2bd1-6441-483b-b470-12cd60948344:4 ' 00:23:54.337 17:01:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:54.337 17:01:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6d8e9ddf-4c73-40d9-8d96-f4cef7fc8461 lbd_6 10 00:23:54.596 17:01:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=7d439b6a-0d6f-4e18-9a77-14a48d0902be 00:23:54.596 17:01:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='7d439b6a-0d6f-4e18-9a77-14a48d0902be:5 ' 00:23:54.596 17:01:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 
$NUM_LVOL) 00:23:54.596 17:01:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6d8e9ddf-4c73-40d9-8d96-f4cef7fc8461 lbd_7 10 00:23:54.878 17:01:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=8dadaa53-b5fd-4bdb-a87e-ff68f1aa150d 00:23:54.878 17:01:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='8dadaa53-b5fd-4bdb-a87e-ff68f1aa150d:6 ' 00:23:54.878 17:01:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:54.878 17:01:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6d8e9ddf-4c73-40d9-8d96-f4cef7fc8461 lbd_8 10 00:23:55.145 17:01:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=26abc3d6-4546-43e9-a4b1-c548ec40538a 00:23:55.145 17:01:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='26abc3d6-4546-43e9-a4b1-c548ec40538a:7 ' 00:23:55.145 17:01:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:55.145 17:01:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6d8e9ddf-4c73-40d9-8d96-f4cef7fc8461 lbd_9 10 00:23:55.403 17:01:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=dbca788b-6d54-452d-b4c9-2891d89ea422 00:23:55.403 17:01:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='dbca788b-6d54-452d-b4c9-2891d89ea422:8 ' 00:23:55.403 17:01:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:55.403 17:01:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6d8e9ddf-4c73-40d9-8d96-f4cef7fc8461 lbd_10 10 00:23:55.662 17:01:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
lb_name=be9e02f2-4763-497e-9ca4-32b1472e90b3 00:23:55.662 17:01:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='be9e02f2-4763-497e-9ca4-32b1472e90b3:9 ' 00:23:55.662 17:01:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target9 Target9_alias 'ead0fa27-402e-4448-86d7-64e942ae2b79:0 3f10ee08-b525-442b-88f5-269a46063509:1 66925da0-3573-4bc1-ac9f-ef0c684bab13:2 58cf9914-f871-43db-94ab-9e50deeee8ff:3 b57a2bd1-6441-483b-b470-12cd60948344:4 7d439b6a-0d6f-4e18-9a77-14a48d0902be:5 8dadaa53-b5fd-4bdb-a87e-ff68f1aa150d:6 26abc3d6-4546-43e9-a4b1-c548ec40538a:7 dbca788b-6d54-452d-b4c9-2891d89ea422:8 be9e02f2-4763-497e-9ca4-32b1472e90b3:9 ' 1:11 256 -d 00:23:55.919 17:01:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:23:55.919 17:01:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=12 00:23:55.919 17:01:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 12 ANY 10.0.0.2/32 00:23:56.176 17:01:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 10 -eq 1 ']' 00:23:56.176 17:01:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:23:56.742 17:01:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc10 00:23:56.742 17:01:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc10 lvs_10 -c 1048576 00:23:57.001 17:01:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=4a697905-c93d-4769-8f67-b0e30a87b948 00:23:57.001 17:01:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:23:57.001 17:01:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:23:57.001 
17:01:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:57.001 17:01:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4a697905-c93d-4769-8f67-b0e30a87b948 lbd_1 10 00:23:57.260 17:01:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f98f5fef-bb81-4115-9e8c-0b3844f605fe 00:23:57.260 17:01:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f98f5fef-bb81-4115-9e8c-0b3844f605fe:0 ' 00:23:57.260 17:01:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:57.260 17:01:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4a697905-c93d-4769-8f67-b0e30a87b948 lbd_2 10 00:23:57.518 17:01:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c8c12202-181e-4f1e-9bfd-a59ab00bc8f7 00:23:57.518 17:01:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c8c12202-181e-4f1e-9bfd-a59ab00bc8f7:1 ' 00:23:57.518 17:01:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:57.518 17:01:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4a697905-c93d-4769-8f67-b0e30a87b948 lbd_3 10 00:23:57.518 17:01:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=980f860f-6e4d-46b7-bfbb-f648cb15b9c6 00:23:57.518 17:01:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='980f860f-6e4d-46b7-bfbb-f648cb15b9c6:2 ' 00:23:57.518 17:01:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:57.518 17:01:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4a697905-c93d-4769-8f67-b0e30a87b948 lbd_4 10 00:23:57.777 
17:01:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=72b8622e-a3ad-4942-b582-a32bd9ba9532 00:23:57.777 17:01:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='72b8622e-a3ad-4942-b582-a32bd9ba9532:3 ' 00:23:57.777 17:01:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:57.777 17:01:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4a697905-c93d-4769-8f67-b0e30a87b948 lbd_5 10 00:23:58.036 17:01:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c9361644-0de3-4476-bf91-274407b3b5eb 00:23:58.036 17:01:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c9361644-0de3-4476-bf91-274407b3b5eb:4 ' 00:23:58.036 17:01:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:58.036 17:01:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4a697905-c93d-4769-8f67-b0e30a87b948 lbd_6 10 00:23:58.294 17:01:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=808341bb-a44e-4b8b-94a7-cd58a17d9d5f 00:23:58.294 17:01:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='808341bb-a44e-4b8b-94a7-cd58a17d9d5f:5 ' 00:23:58.294 17:01:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:58.294 17:01:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4a697905-c93d-4769-8f67-b0e30a87b948 lbd_7 10 00:23:58.565 17:02:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2730f51b-ccbf-4786-8191-25874ae460e5 00:23:58.565 17:02:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2730f51b-ccbf-4786-8191-25874ae460e5:6 ' 00:23:58.565 17:02:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:58.565 17:02:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4a697905-c93d-4769-8f67-b0e30a87b948 lbd_8 10 00:23:58.831 17:02:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=7362b252-d0d7-4db5-9d10-1b43bbdb7e6e 00:23:58.831 17:02:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='7362b252-d0d7-4db5-9d10-1b43bbdb7e6e:7 ' 00:23:58.831 17:02:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:58.831 17:02:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4a697905-c93d-4769-8f67-b0e30a87b948 lbd_9 10 00:23:59.089 17:02:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=3f96b4e5-ebdc-4133-af28-3a3735b83948 00:23:59.089 17:02:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3f96b4e5-ebdc-4133-af28-3a3735b83948:8 ' 00:23:59.089 17:02:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:23:59.089 17:02:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4a697905-c93d-4769-8f67-b0e30a87b948 lbd_10 10 00:23:59.347 17:02:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=dba64d1b-c00f-4b98-99df-47e1c8e60737 00:23:59.347 17:02:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='dba64d1b-c00f-4b98-99df-47e1c8e60737:9 ' 00:23:59.347 17:02:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target10 Target10_alias 'f98f5fef-bb81-4115-9e8c-0b3844f605fe:0 c8c12202-181e-4f1e-9bfd-a59ab00bc8f7:1 980f860f-6e4d-46b7-bfbb-f648cb15b9c6:2 72b8622e-a3ad-4942-b582-a32bd9ba9532:3 
c9361644-0de3-4476-bf91-274407b3b5eb:4 808341bb-a44e-4b8b-94a7-cd58a17d9d5f:5 2730f51b-ccbf-4786-8191-25874ae460e5:6 7362b252-d0d7-4db5-9d10-1b43bbdb7e6e:7 3f96b4e5-ebdc-4133-af28-3a3735b83948:8 dba64d1b-c00f-4b98-99df-47e1c8e60737:9 ' 1:12 256 -d 00:23:59.604 17:02:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@66 -- # timing_exit setup 00:23:59.604 17:02:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:59.604 17:02:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:23:59.604 17:02:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@68 -- # sleep 1 00:24:00.542 17:02:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@70 -- # timing_enter discovery 00:24:00.542 17:02:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:00.542 17:02:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:00.542 17:02:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@71 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:24:00.542 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:24:00.542 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:24:00.542 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:24:00.542 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:24:00.542 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:24:00.542 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:24:00.542 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:24:00.542 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:24:00.542 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:24:00.542 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:24:00.542 17:02:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@72 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:24:00.804 [2024-07-22 17:02:02.201464] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:00.804 [2024-07-22 17:02:02.211814] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: 
unsupported INQUIRY VPD page 0xb9 00:24:00.804 [2024-07-22 17:02:02.219166] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:00.804 [2024-07-22 17:02:02.240219] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:00.804 [2024-07-22 17:02:02.259366] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:00.804 [2024-07-22 17:02:02.269927] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:00.804 [2024-07-22 17:02:02.276741] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:00.804 [2024-07-22 17:02:02.290458] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:00.804 [2024-07-22 17:02:02.295956] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:00.804 [2024-07-22 17:02:02.323916] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:00.804 [2024-07-22 17:02:02.363964] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:00.804 [2024-07-22 17:02:02.364782] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:00.804 [2024-07-22 17:02:02.373848] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:00.804 [2024-07-22 17:02:02.382120] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:00.804 [2024-07-22 17:02:02.404462] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:00.804 [2024-07-22 17:02:02.405625] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:00.804 [2024-07-22 17:02:02.414974] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.063 [2024-07-22 17:02:02.460087] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.063 
[2024-07-22 17:02:02.464106] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.063 [2024-07-22 17:02:02.506497] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.063 [2024-07-22 17:02:02.510352] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.063 [2024-07-22 17:02:02.516532] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.063 [2024-07-22 17:02:02.529930] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.063 [2024-07-22 17:02:02.531065] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.063 [2024-07-22 17:02:02.540909] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.063 [2024-07-22 17:02:02.561195] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.063 [2024-07-22 17:02:02.563834] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.063 [2024-07-22 17:02:02.572674] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.063 [2024-07-22 17:02:02.590832] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.063 [2024-07-22 17:02:02.612410] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.063 [2024-07-22 17:02:02.628235] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.063 [2024-07-22 17:02:02.662941] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.063 [2024-07-22 17:02:02.668046] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.063 [2024-07-22 17:02:02.675161] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.321 [2024-07-22 17:02:02.693987] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.321 [2024-07-22 17:02:02.755092] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.321 [2024-07-22 17:02:02.755688] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.321 [2024-07-22 17:02:02.759066] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.321 [2024-07-22 17:02:02.766799] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.321 [2024-07-22 17:02:02.787697] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.321 [2024-07-22 17:02:02.790662] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.321 [2024-07-22 17:02:02.792384] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.321 [2024-07-22 17:02:02.825046] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.321 [2024-07-22 17:02:02.843667] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.321 [2024-07-22 17:02:02.910780] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.321 [2024-07-22 17:02:02.922528] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.321 [2024-07-22 17:02:02.930633] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.579 [2024-07-22 17:02:03.085975] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.579 [2024-07-22 17:02:03.136801] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.579 [2024-07-22 17:02:03.157145] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:01.855 [2024-07-22 17:02:03.328052] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:24:01.855 [2024-07-22 17:02:03.358633] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:02.898 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 
10.0.0.1,3260] 00:24:02.898 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:24:02.899 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:24:02.899 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:24:02.899 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:24:02.899 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:24:02.899 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:24:02.899 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:24:02.899 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:24:02.899 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:24:02.899 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:24:02.899 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:24:02.899 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:24:02.899 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:24:02.899 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:24:02.899 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:24:02.899 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:24:02.899 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:24:02.899 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 
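The ten "Logging in" / "Login ... successful" pairs above are emitted by open-iscsi's iscsiadm. As a hedged aside (the actual test drives this through SPDK's iscsi_tgt helper scripts, not this loop), the equivalent per-target node-login commands can be generated like so; the sketch echoes the commands instead of executing them so it runs without a live target:

```shell
# Sketch only: emit the iscsiadm login command for each target seen in the
# log (iqn.2016-06.io.spdk:Target1..Target10 on portal 10.0.0.1:3260).
# Echoing rather than executing keeps this runnable without an iSCSI target.
portal=10.0.0.1:3260
for i in $(seq 1 10); do
    echo "iscsiadm -m node -p $portal -T iqn.2016-06.io.spdk:Target$i --login"
done
```

Running the echoed commands for real requires the targets to have been discovered first (e.g. `iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260`).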
00:24:02.899 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:24:02.899 17:02:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@73 -- # waitforiscsidevices 100 00:24:02.899 17:02:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@116 -- # local num=100 00:24:02.899 17:02:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:24:02.899 17:02:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:24:02.899 17:02:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:24:02.899 17:02:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:24:02.899 [2024-07-22 17:02:04.468498] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:02.899 17:02:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # n=100 00:24:02.899 17:02:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@120 -- # '[' 100 -ne 100 ']' 00:24:02.899 17:02:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@123 -- # return 0 00:24:02.899 17:02:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@74 -- # timing_exit discovery 00:24:02.899 17:02:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:02.899 17:02:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:03.157 17:02:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@76 -- # timing_enter fio 00:24:03.157 17:02:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:03.157 17:02:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:03.157 17:02:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 8 -t randwrite -r 10 -v 00:24:03.157 [global] 00:24:03.157 thread=1 
00:24:03.157 invalidate=1 00:24:03.157 rw=randwrite 00:24:03.157 time_based=1 00:24:03.157 runtime=10 00:24:03.157 ioengine=libaio 00:24:03.157 direct=1 00:24:03.157 bs=131072 00:24:03.157 iodepth=8 00:24:03.157 norandommap=0 00:24:03.157 numjobs=1 00:24:03.157 00:24:03.157 verify_dump=1 00:24:03.157 verify_backlog=512 00:24:03.157 verify_state_save=0 00:24:03.157 do_verify=1 00:24:03.157 verify=crc32c-intel 00:24:03.157 [job0] 00:24:03.157 filename=/dev/sdc 00:24:03.157 [job1] 00:24:03.157 filename=/dev/sdf 00:24:03.157 [job2] 00:24:03.157 filename=/dev/sdi 00:24:03.157 [job3] 00:24:03.157 filename=/dev/sdk 00:24:03.157 [job4] 00:24:03.157 filename=/dev/sdm 00:24:03.157 [job5] 00:24:03.157 filename=/dev/sds 00:24:03.157 [job6] 00:24:03.157 filename=/dev/sdy 00:24:03.157 [job7] 00:24:03.157 filename=/dev/sdad 00:24:03.157 [job8] 00:24:03.157 filename=/dev/sdaf 00:24:03.157 [job9] 00:24:03.157 filename=/dev/sdaj 00:24:03.157 [job10] 00:24:03.157 filename=/dev/sdd 00:24:03.157 [job11] 00:24:03.157 filename=/dev/sdj 00:24:03.157 [job12] 00:24:03.157 filename=/dev/sdn 00:24:03.157 [job13] 00:24:03.157 filename=/dev/sdq 00:24:03.157 [job14] 00:24:03.157 filename=/dev/sdv 00:24:03.157 [job15] 00:24:03.157 filename=/dev/sdz 00:24:03.157 [job16] 00:24:03.157 filename=/dev/sdac 00:24:03.157 [job17] 00:24:03.157 filename=/dev/sdag 00:24:03.157 [job18] 00:24:03.157 filename=/dev/sdam 00:24:03.157 [job19] 00:24:03.157 filename=/dev/sdan 00:24:03.157 [job20] 00:24:03.157 filename=/dev/sdg 00:24:03.157 [job21] 00:24:03.157 filename=/dev/sdp 00:24:03.157 [job22] 00:24:03.157 filename=/dev/sdu 00:24:03.157 [job23] 00:24:03.157 filename=/dev/sdw 00:24:03.157 [job24] 00:24:03.157 filename=/dev/sdab 00:24:03.157 [job25] 00:24:03.157 filename=/dev/sdah 00:24:03.157 [job26] 00:24:03.157 filename=/dev/sdal 00:24:03.157 [job27] 00:24:03.157 filename=/dev/sdap 00:24:03.157 [job28] 00:24:03.157 filename=/dev/sdar 00:24:03.157 [job29] 00:24:03.157 filename=/dev/sdau 00:24:03.157 [job30] 
00:24:03.157 filename=/dev/sdae 00:24:03.157 [job31] 00:24:03.157 filename=/dev/sdai 00:24:03.157 [job32] 00:24:03.157 filename=/dev/sdak 00:24:03.157 [job33] 00:24:03.157 filename=/dev/sdao 00:24:03.157 [job34] 00:24:03.157 filename=/dev/sdaq 00:24:03.157 [job35] 00:24:03.157 filename=/dev/sdas 00:24:03.157 [job36] 00:24:03.157 filename=/dev/sdat 00:24:03.157 [job37] 00:24:03.157 filename=/dev/sdav 00:24:03.157 [job38] 00:24:03.157 filename=/dev/sdaw 00:24:03.157 [job39] 00:24:03.157 filename=/dev/sdax 00:24:03.157 [job40] 00:24:03.157 filename=/dev/sday 00:24:03.157 [job41] 00:24:03.157 filename=/dev/sdaz 00:24:03.157 [job42] 00:24:03.157 filename=/dev/sdbb 00:24:03.157 [job43] 00:24:03.157 filename=/dev/sdbc 00:24:03.157 [job44] 00:24:03.157 filename=/dev/sdbf 00:24:03.157 [job45] 00:24:03.157 filename=/dev/sdbg 00:24:03.157 [job46] 00:24:03.157 filename=/dev/sdbi 00:24:03.157 [job47] 00:24:03.157 filename=/dev/sdbl 00:24:03.157 [job48] 00:24:03.157 filename=/dev/sdbo 00:24:03.157 [job49] 00:24:03.157 filename=/dev/sdbr 00:24:03.157 [job50] 00:24:03.157 filename=/dev/sdba 00:24:03.157 [job51] 00:24:03.157 filename=/dev/sdbd 00:24:03.157 [job52] 00:24:03.157 filename=/dev/sdbe 00:24:03.157 [job53] 00:24:03.157 filename=/dev/sdbh 00:24:03.157 [job54] 00:24:03.157 filename=/dev/sdbk 00:24:03.157 [job55] 00:24:03.157 filename=/dev/sdbm 00:24:03.157 [job56] 00:24:03.157 filename=/dev/sdbp 00:24:03.157 [job57] 00:24:03.157 filename=/dev/sdbq 00:24:03.157 [job58] 00:24:03.157 filename=/dev/sdbt 00:24:03.157 [job59] 00:24:03.157 filename=/dev/sdbv 00:24:03.157 [job60] 00:24:03.157 filename=/dev/sdbj 00:24:03.157 [job61] 00:24:03.157 filename=/dev/sdbn 00:24:03.157 [job62] 00:24:03.157 filename=/dev/sdbs 00:24:03.157 [job63] 00:24:03.157 filename=/dev/sdbu 00:24:03.157 [job64] 00:24:03.157 filename=/dev/sdbw 00:24:03.157 [job65] 00:24:03.157 filename=/dev/sdbx 00:24:03.157 [job66] 00:24:03.157 filename=/dev/sdby 00:24:03.157 [job67] 00:24:03.157 filename=/dev/sdcb 
00:24:03.157 [job68] 00:24:03.157 filename=/dev/sdce 00:24:03.157 [job69] 00:24:03.157 filename=/dev/sdcg 00:24:03.157 [job70] 00:24:03.157 filename=/dev/sdca 00:24:03.157 [job71] 00:24:03.157 filename=/dev/sdcc 00:24:03.157 [job72] 00:24:03.157 filename=/dev/sdcf 00:24:03.157 [job73] 00:24:03.157 filename=/dev/sdci 00:24:03.157 [job74] 00:24:03.157 filename=/dev/sdck 00:24:03.157 [job75] 00:24:03.157 filename=/dev/sdcm 00:24:03.157 [job76] 00:24:03.157 filename=/dev/sdcp 00:24:03.157 [job77] 00:24:03.157 filename=/dev/sdcr 00:24:03.157 [job78] 00:24:03.157 filename=/dev/sdct 00:24:03.415 [job79] 00:24:03.415 filename=/dev/sdcu 00:24:03.415 [job80] 00:24:03.415 filename=/dev/sdbz 00:24:03.415 [job81] 00:24:03.415 filename=/dev/sdcd 00:24:03.415 [job82] 00:24:03.415 filename=/dev/sdch 00:24:03.415 [job83] 00:24:03.415 filename=/dev/sdcj 00:24:03.415 [job84] 00:24:03.415 filename=/dev/sdcl 00:24:03.415 [job85] 00:24:03.415 filename=/dev/sdcn 00:24:03.415 [job86] 00:24:03.415 filename=/dev/sdco 00:24:03.415 [job87] 00:24:03.415 filename=/dev/sdcq 00:24:03.415 [job88] 00:24:03.415 filename=/dev/sdcs 00:24:03.415 [job89] 00:24:03.415 filename=/dev/sdcv 00:24:03.415 [job90] 00:24:03.415 filename=/dev/sda 00:24:03.415 [job91] 00:24:03.415 filename=/dev/sdb 00:24:03.415 [job92] 00:24:03.415 filename=/dev/sde 00:24:03.415 [job93] 00:24:03.415 filename=/dev/sdh 00:24:03.415 [job94] 00:24:03.415 filename=/dev/sdl 00:24:03.415 [job95] 00:24:03.415 filename=/dev/sdo 00:24:03.415 [job96] 00:24:03.415 filename=/dev/sdr 00:24:03.415 [job97] 00:24:03.415 filename=/dev/sdt 00:24:03.415 [job98] 00:24:03.415 filename=/dev/sdx 00:24:03.415 [job99] 00:24:03.415 filename=/dev/sdaa 00:24:04.788 queue_depth set to 113 (sdc) 00:24:04.788 queue_depth set to 113 (sdf) 00:24:04.788 queue_depth set to 113 (sdi) 00:24:04.788 queue_depth set to 113 (sdk) 00:24:04.788 queue_depth set to 113 (sdm) 00:24:04.788 queue_depth set to 113 (sds) 00:24:04.788 queue_depth set to 113 (sdy) 00:24:04.788 
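The parameter dump above is the job file that scripts/fio-wrapper generates from its `-p iscsi -i 131072 -d 8 -t randwrite -r 10 -v` arguments. Reassembled into ordinary fio INI form (all values copied from the log; only the first two of the 100 [jobN] sections are shown):

```ini
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=131072
iodepth=8
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/sdc

[job1]
filename=/dev/sdf
```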
queue_depth set to 113 (sdad) 00:24:04.788 queue_depth set to 113 (sdaf) 00:24:04.788 queue_depth set to 113 (sdaj) 00:24:04.788 queue_depth set to 113 (sdd) 00:24:04.788 queue_depth set to 113 (sdj) 00:24:04.788 queue_depth set to 113 (sdn) 00:24:05.046 queue_depth set to 113 (sdq) 00:24:05.046 queue_depth set to 113 (sdv) 00:24:05.046 queue_depth set to 113 (sdz) 00:24:05.046 queue_depth set to 113 (sdac) 00:24:05.046 queue_depth set to 113 (sdag) 00:24:05.046 queue_depth set to 113 (sdam) 00:24:05.046 queue_depth set to 113 (sdan) 00:24:05.046 queue_depth set to 113 (sdg) 00:24:05.046 queue_depth set to 113 (sdp) 00:24:05.046 queue_depth set to 113 (sdu) 00:24:05.046 queue_depth set to 113 (sdw) 00:24:05.305 queue_depth set to 113 (sdab) 00:24:05.305 queue_depth set to 113 (sdah) 00:24:05.305 queue_depth set to 113 (sdal) 00:24:05.305 queue_depth set to 113 (sdap) 00:24:05.305 queue_depth set to 113 (sdar) 00:24:05.305 queue_depth set to 113 (sdau) 00:24:05.305 queue_depth set to 113 (sdae) 00:24:05.305 queue_depth set to 113 (sdai) 00:24:05.305 queue_depth set to 113 (sdak) 00:24:05.305 queue_depth set to 113 (sdao) 00:24:05.305 queue_depth set to 113 (sdaq) 00:24:05.305 queue_depth set to 113 (sdas) 00:24:05.563 queue_depth set to 113 (sdat) 00:24:05.563 queue_depth set to 113 (sdav) 00:24:05.563 queue_depth set to 113 (sdaw) 00:24:05.563 queue_depth set to 113 (sdax) 00:24:05.563 queue_depth set to 113 (sday) 00:24:05.563 queue_depth set to 113 (sdaz) 00:24:05.563 queue_depth set to 113 (sdbb) 00:24:05.563 queue_depth set to 113 (sdbc) 00:24:05.563 queue_depth set to 113 (sdbf) 00:24:05.563 queue_depth set to 113 (sdbg) 00:24:05.563 queue_depth set to 113 (sdbi) 00:24:05.821 queue_depth set to 113 (sdbl) 00:24:05.821 queue_depth set to 113 (sdbo) 00:24:05.821 queue_depth set to 113 (sdbr) 00:24:05.821 queue_depth set to 113 (sdba) 00:24:05.821 queue_depth set to 113 (sdbd) 00:24:05.821 queue_depth set to 113 (sdbe) 00:24:05.821 queue_depth set to 113 (sdbh) 
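Before the fio job file was generated, the xtrace above ran common.sh's `waitforiscsidevices 100` to confirm that all 100 SCSI disks had attached. A self-contained sketch of that polling loop, reconstructed from the traced commands (`local num=100`, the `i <= 20` retry bound, and the `iscsiadm -m session -P 3 | grep -c 'Attached scsi disk sd[a-z]*'` count); the count command is injected as a parameter here so the sketch runs without an iSCSI session:

```shell
# Reconstruction of waitforiscsidevices from iscsi_tgt/common.sh, based on
# the xtrace output in this log. count_cmd stands in for:
#   iscsiadm -m session -P 3 | grep -c 'Attached scsi disk sd[a-z]*'
waitforiscsidevices() {
    local num=$1 count_cmd=$2 i n
    for (( i = 1; i <= 20; i++ )); do
        n=$("$count_cmd")            # how many disks are attached right now
        if [ "$n" -eq "$num" ]; then
            return 0                 # all expected devices present
        fi
        sleep 0.1                    # not there yet; poll again
    done
    return 1                         # gave up after 20 attempts
}
```

The traced run hit the expected count on the first pass (`n=100`, `'[' 100 -ne 100 ']'` false, `return 0`).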
00:24:05.821 queue_depth set to 113 (sdbk) 00:24:05.821 queue_depth set to 113 (sdbm) 00:24:05.821 queue_depth set to 113 (sdbp) 00:24:06.080 queue_depth set to 113 (sdbq) 00:24:06.080 queue_depth set to 113 (sdbt) 00:24:06.080 queue_depth set to 113 (sdbv) 00:24:06.080 queue_depth set to 113 (sdbj) 00:24:06.080 queue_depth set to 113 (sdbn) 00:24:06.080 queue_depth set to 113 (sdbs) 00:24:06.080 queue_depth set to 113 (sdbu) 00:24:06.080 queue_depth set to 113 (sdbw) 00:24:06.080 queue_depth set to 113 (sdbx) 00:24:06.080 queue_depth set to 113 (sdby) 00:24:06.080 queue_depth set to 113 (sdcb) 00:24:06.338 queue_depth set to 113 (sdce) 00:24:06.338 queue_depth set to 113 (sdcg) 00:24:06.338 queue_depth set to 113 (sdca) 00:24:06.338 queue_depth set to 113 (sdcc) 00:24:06.338 queue_depth set to 113 (sdcf) 00:24:06.338 queue_depth set to 113 (sdci) 00:24:06.338 queue_depth set to 113 (sdck) 00:24:06.338 queue_depth set to 113 (sdcm) 00:24:06.338 queue_depth set to 113 (sdcp) 00:24:06.338 queue_depth set to 113 (sdcr) 00:24:06.338 queue_depth set to 113 (sdct) 00:24:06.596 queue_depth set to 113 (sdcu) 00:24:06.596 queue_depth set to 113 (sdbz) 00:24:06.596 queue_depth set to 113 (sdcd) 00:24:06.596 queue_depth set to 113 (sdch) 00:24:06.596 queue_depth set to 113 (sdcj) 00:24:06.596 queue_depth set to 113 (sdcl) 00:24:06.596 queue_depth set to 113 (sdcn) 00:24:06.596 queue_depth set to 113 (sdco) 00:24:06.596 queue_depth set to 113 (sdcq) 00:24:06.596 queue_depth set to 113 (sdcs) 00:24:06.596 queue_depth set to 113 (sdcv) 00:24:06.854 queue_depth set to 113 (sda) 00:24:06.854 queue_depth set to 113 (sdb) 00:24:06.854 queue_depth set to 113 (sde) 00:24:06.854 queue_depth set to 113 (sdh) 00:24:06.854 queue_depth set to 113 (sdl) 00:24:06.854 queue_depth set to 113 (sdo) 00:24:06.854 queue_depth set to 113 (sdr) 00:24:06.854 queue_depth set to 113 (sdt) 00:24:06.854 queue_depth set to 113 (sdx) 00:24:06.854 queue_depth set to 113 (sdaa) 00:24:07.112 job0: (g=0): 
rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job1: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job2: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job3: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job4: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job5: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job6: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job7: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job8: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job9: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job10: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job11: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job12: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job13: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job14: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job15: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, 
ioengine=libaio, iodepth=8 00:24:07.112 job16: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job17: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job18: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job19: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job20: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job21: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job22: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.112 job23: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job24: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job25: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job26: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job27: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job28: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job29: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job30: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job31: (g=0): rw=randwrite, bs=(R) 
128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job32: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job33: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job34: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job35: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job36: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job37: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job38: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job39: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job40: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job41: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job42: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job43: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job44: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job45: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job46: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 
00:24:07.113 job47: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job48: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job49: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job50: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job51: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job52: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job53: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job54: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job55: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job56: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job57: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job58: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job59: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job60: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job61: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job62: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 
128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job63: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job64: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job65: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job66: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job67: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job68: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job69: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.113 job70: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job71: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job72: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job73: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job74: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job75: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job76: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job77: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 
job78: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job79: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job80: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job81: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job82: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job83: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job84: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job85: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job86: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job87: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job88: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job89: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job90: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job91: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job92: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job93: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 
128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job94: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job95: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job96: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job97: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job98: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 job99: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:24:07.371 fio-3.35 00:24:07.371 Starting 100 threads 00:24:07.371 [2024-07-22 17:02:08.754793] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.759081] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.762356] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.766624] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.769646] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.772903] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.776154] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.779404] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.781883] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 
17:02:08.785294] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 
17:02:08.908729] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.911147] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.914029] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.917767] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.923103] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.925540] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.928944] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.932202] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.934908] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.937932] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.941495] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.945250] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.947373] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.949750] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.951843] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.954405] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.956591] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.959067] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.961117] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.963668] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.965736] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.967934] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.970065] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.972423] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.974889] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.977125] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.979578] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.371 [2024-07-22 17:02:08.982224] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.629 [2024-07-22 17:02:08.984684] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.629 [2024-07-22 17:02:08.987686] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.629 [2024-07-22 17:02:08.989691] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.629 [2024-07-22 17:02:08.991689] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.629 [2024-07-22 17:02:08.993848] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:24:07.629 [2024-07-22 17:02:08.996488] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.629 [2024-07-22 17:02:08.999419] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.629 [2024-07-22 17:02:09.002633] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.629 [2024-07-22 17:02:09.006652] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.629 [2024-07-22 17:02:09.011696] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.629 [2024-07-22 17:02:09.014110] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.629 [2024-07-22 17:02:09.016321] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.629 [2024-07-22 17:02:09.018364] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:07.629 [2024-07-22 17:02:09.020383] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:12.901 [2024-07-22 17:02:13.443122] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:12.901 [2024-07-22 17:02:13.731744] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:12.901 [2024-07-22 17:02:13.803234] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:12.901 [2024-07-22 17:02:13.855219] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:12.901 [2024-07-22 17:02:13.936805] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:12.901 [2024-07-22 17:02:14.016794] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:12.901 [2024-07-22 17:02:14.096593] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:12.901 [2024-07-22 
17:02:14.227613] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:12.901 [2024-07-22 17:02:14.365515] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:12.901 [2024-07-22 17:02:14.460370] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:13.158 [2024-07-22 17:02:14.537916] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:13.158 [2024-07-22 17:02:14.627376] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:13.158 [2024-07-22 17:02:14.714414] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:13.415 [2024-07-22 17:02:14.774612] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:13.415 [2024-07-22 17:02:14.881626] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:13.415 [2024-07-22 17:02:14.958192] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:13.415 [2024-07-22 17:02:15.008071] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:13.673 [2024-07-22 17:02:15.063529] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:13.673 [2024-07-22 17:02:15.118300] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:13.673 [2024-07-22 17:02:15.156379] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:13.673 [2024-07-22 17:02:15.213001] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:13.673 [2024-07-22 17:02:15.262896] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:13.932 [2024-07-22 17:02:15.304393] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:13.932 [2024-07-22 17:02:15.360463] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:13.932 [2024-07-22 17:02:15.433313] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:14.190 [2024-07-22 17:02:15.585335] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:14.190 [2024-07-22 17:02:15.643303] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:14.190 [2024-07-22 17:02:15.730835] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:14.190 [2024-07-22 17:02:15.801914] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:14.449 [2024-07-22 17:02:15.894313] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:14.449 [2024-07-22 17:02:16.011258] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:14.706 [2024-07-22 17:02:16.088610] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:14.706 [2024-07-22 17:02:16.192668] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:14.706 [2024-07-22 17:02:16.314664] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:14.963 [2024-07-22 17:02:16.411219] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:14.963 [2024-07-22 17:02:16.476020] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:14.963 [2024-07-22 17:02:16.548360] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:15.220 [2024-07-22 17:02:16.668133] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:15.220 [2024-07-22 17:02:16.750993] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:15.478 [2024-07-22 17:02:16.848662] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:24:15.478 [2024-07-22 17:02:16.929557] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:15.478 [2024-07-22 17:02:16.987261] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:15.478 [2024-07-22 17:02:17.027022] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:15.478 [2024-07-22 17:02:17.077341] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:15.735 [2024-07-22 17:02:17.135609] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:15.735 [2024-07-22 17:02:17.245639] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:15.992 [2024-07-22 17:02:17.359341] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:15.992 [2024-07-22 17:02:17.437649] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:16.250 [2024-07-22 17:02:17.644419] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:16.250 [2024-07-22 17:02:17.737964] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:16.250 [2024-07-22 17:02:17.811340] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:16.508 [2024-07-22 17:02:17.920846] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:16.508 [2024-07-22 17:02:18.056435] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:16.766 [2024-07-22 17:02:18.173848] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:16.766 [2024-07-22 17:02:18.307985] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:17.024 [2024-07-22 17:02:18.391445] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:17.024 [2024-07-22 
17:02:18.479770] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:17.024 [2024-07-22 17:02:18.566962] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:17.282 [2024-07-22 17:02:18.641367] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:17.282 [2024-07-22 17:02:18.709734] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:17.282 [2024-07-22 17:02:18.798668] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:17.540 [2024-07-22 17:02:18.913641] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:17.540 [2024-07-22 17:02:18.970675] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:17.540 [2024-07-22 17:02:19.016129] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:17.541 [2024-07-22 17:02:19.128573] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:17.798 [2024-07-22 17:02:19.217115] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:17.798 [2024-07-22 17:02:19.296466] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:17.798 [2024-07-22 17:02:19.340355] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:17.798 [2024-07-22 17:02:19.408473] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:18.056 [2024-07-22 17:02:19.493555] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:18.056 [2024-07-22 17:02:19.533575] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:18.056 [2024-07-22 17:02:19.617036] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:18.312 [2024-07-22 17:02:19.755225] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:18.312 [2024-07-22 17:02:19.825326] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:18.312 [2024-07-22 17:02:19.906232] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:18.570 [2024-07-22 17:02:20.023792] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:18.570 [2024-07-22 17:02:20.087778] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:18.570 [2024-07-22 17:02:20.157637] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:18.828 [2024-07-22 17:02:20.247694] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:18.828 [2024-07-22 17:02:20.344227] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:19.086 [2024-07-22 17:02:20.487826] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:19.086 [2024-07-22 17:02:20.597638] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:19.086 [2024-07-22 17:02:20.672060] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:19.343 [2024-07-22 17:02:20.763905] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:19.343 [2024-07-22 17:02:20.900113] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:19.601 [2024-07-22 17:02:20.966181] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:19.601 [2024-07-22 17:02:21.021587] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:19.601 [2024-07-22 17:02:21.073601] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:19.601 [2024-07-22 17:02:21.194688] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:24:19.859 [2024-07-22 17:02:21.310488] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:19.859 [2024-07-22 17:02:21.417716] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:19.859 [2024-07-22 17:02:21.470127] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:20.117 [2024-07-22 17:02:21.599638] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:20.117 [2024-07-22 17:02:21.698885] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:20.375 [2024-07-22 17:02:21.790146] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:20.375 [2024-07-22 17:02:21.841931] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:20.375 [2024-07-22 17:02:21.897121] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:20.375 [2024-07-22 17:02:21.954576] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:20.633 [2024-07-22 17:02:22.070502] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:20.633 [2024-07-22 17:02:22.159482] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:20.633 [2024-07-22 17:02:22.230879] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:20.892 [2024-07-22 17:02:22.272967] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:20.892 [2024-07-22 17:02:22.389223] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:20.892 [2024-07-22 17:02:22.433117] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:20.892 [2024-07-22 17:02:22.478217] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.149 [2024-07-22 
17:02:22.552931] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.149 [2024-07-22 17:02:22.611309] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.149 [2024-07-22 17:02:22.707131] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.149 [2024-07-22 17:02:22.736758] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.149 [2024-07-22 17:02:22.753380] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.417 [2024-07-22 17:02:22.766392] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.417 [2024-07-22 17:02:22.769035] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.417 [2024-07-22 17:02:22.771902] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.417 [2024-07-22 17:02:22.774602] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.417 [2024-07-22 17:02:22.777099] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.417 [2024-07-22 17:02:22.779494] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.417 [2024-07-22 17:02:22.781989] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.417 [2024-07-22 17:02:22.784603] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.417 [2024-07-22 17:02:22.787341] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.417 [2024-07-22 17:02:22.790019] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.417 [2024-07-22 17:02:22.792734] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.417 [2024-07-22 17:02:22.795586] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:24:21.417
00:24:21.417 job0: (groupid=0, jobs=1): err= 0: pid=71024: Mon Jul 22 17:02:22 2024
00:24:21.417 read: IOPS=62, BW=7945KiB/s (8136kB/s)(60.0MiB/7733msec)
00:24:21.417 slat (usec): min=7, max=1115, avg=62.05, stdev=118.07
00:24:21.417 clat (msec): min=7, max=363, avg=37.20, stdev=57.41
00:24:21.417 lat (msec): min=7, max=363, avg=37.26, stdev=57.41
00:24:21.417 clat percentiles (msec):
00:24:21.417 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10],
00:24:21.417 | 30.00th=[ 13], 40.00th=[ 16], 50.00th=[ 19], 60.00th=[ 23],
00:24:21.417 | 70.00th=[ 28], 80.00th=[ 44], 90.00th=[ 79], 95.00th=[ 129],
00:24:21.417 | 99.00th=[ 355], 99.50th=[ 359], 99.90th=[ 363], 99.95th=[ 363],
00:24:21.417 | 99.99th=[ 363]
00:24:21.417 write: IOPS=62, BW=7959KiB/s (8150kB/s)(61.0MiB/7848msec); 0 zone resets
00:24:21.417 slat (usec): min=32, max=1722, avg=139.39, stdev=184.23
00:24:21.417 clat (msec): min=25, max=470, avg=127.62, stdev=65.07
00:24:21.417 lat (msec): min=25, max=470, avg=127.76, stdev=65.10
00:24:21.417 clat percentiles (msec):
00:24:21.417 | 1.00th=[ 31], 5.00th=[ 72], 10.00th=[ 77], 20.00th=[ 82],
00:24:21.417 | 30.00th=[ 89], 40.00th=[ 93], 50.00th=[ 102], 60.00th=[ 114],
00:24:21.417 | 70.00th=[ 138], 80.00th=[ 182], 90.00th=[ 215], 95.00th=[ 234],
00:24:21.417 | 99.00th=[ 380], 99.50th=[ 393], 99.90th=[ 472], 99.95th=[ 472],
00:24:21.417 | 99.99th=[ 472]
00:24:21.417 bw ( KiB/s): min= 2560, max=13029, per=0.76%, avg=7226.88, stdev=3417.93, samples=17
00:24:21.417 iops : min= 20, max= 101, avg=56.35, stdev=26.56, samples=17
00:24:21.417 lat (msec) : 10=10.85%, 20=15.91%, 50=14.98%, 100=28.10%, 250=26.76%
00:24:21.417 lat (msec) : 500=3.41%
00:24:21.417 cpu : usr=0.34%, sys=0.24%, ctx=1630, majf=0, minf=7
00:24:21.417 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=94.9%, 16=0.0%, 32=0.0%, >=64=0.0%
00:24:21.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:21.417 complete : 0=0.0%, 4=99.2%, 8=0.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:21.417 issued rwts: total=480,488,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:21.417 latency : target=0, window=0, percentile=100.00%, depth=8
00:24:21.417 job1: (groupid=0, jobs=1): err= 0: pid=71025: Mon Jul 22 17:02:22 2024
00:24:21.417 read: IOPS=60, BW=7787KiB/s (7974kB/s)(60.0MiB/7890msec)
00:24:21.417 slat (usec): min=6, max=1240, avg=68.21, stdev=132.66
00:24:21.417 clat (usec): min=6248, max=90026, avg=24829.86, stdev=14417.54
00:24:21.417 lat (usec): min=6341, max=90053, avg=24898.07, stdev=14424.90
00:24:21.417 clat percentiles (usec):
00:24:21.417 | 1.00th=[ 6652], 5.00th=[ 6980], 10.00th=[ 8094], 20.00th=[13566],
00:24:21.417 | 30.00th=[16319], 40.00th=[20055], 50.00th=[22938], 60.00th=[26084],
00:24:21.417 | 70.00th=[27919], 80.00th=[32113], 90.00th=[40109], 95.00th=[51119],
00:24:21.417 | 99.00th=[81265], 99.50th=[84411], 99.90th=[89654], 99.95th=[89654],
00:24:21.417 | 99.99th=[89654]
00:24:21.417 write: IOPS=66, BW=8469KiB/s (8672kB/s)(70.9MiB/8570msec); 0 zone resets
00:24:21.417 slat (usec): min=33, max=2895, avg=137.60, stdev=229.61
00:24:21.417 clat (msec): min=45, max=441, avg=119.51, stdev=61.73
00:24:21.417 lat (msec): min=45, max=442, avg=119.65, stdev=61.75
00:24:21.417 clat percentiles (msec):
00:24:21.417 | 1.00th=[ 51], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 81],
00:24:21.417 | 30.00th=[ 86], 40.00th=[ 91], 50.00th=[ 96], 60.00th=[ 106],
00:24:21.417 | 70.00th=[ 118], 80.00th=[ 146], 90.00th=[ 197], 95.00th=[ 241],
00:24:21.417 | 99.00th=[ 409], 99.50th=[ 426], 99.90th=[ 443], 99.95th=[ 443],
00:24:21.417 | 99.99th=[ 443]
00:24:21.417 bw ( KiB/s): min= 1792, max=12774, per=0.75%, avg=7153.40, stdev=3936.59, samples=20
00:24:21.417 iops : min= 14, max= 99, avg=55.75, stdev=30.66, samples=20
00:24:21.417 lat (msec) : 10=5.16%, 20=13.09%, 50=25.41%, 100=31.14%, 250=22.64%
00:24:21.417 lat (msec) : 500=2.58%
00:24:21.417 cpu : usr=0.34%, sys=0.27%, ctx=1781, majf=0, minf=3
00:24:21.417 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:24:21.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:21.417 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:21.417 issued rwts: total=480,567,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:21.417 latency : target=0, window=0, percentile=100.00%, depth=8
00:24:21.417 job2: (groupid=0, jobs=1): err= 0: pid=71031: Mon Jul 22 17:02:22 2024
00:24:21.417 read: IOPS=61, BW=7857KiB/s (8045kB/s)(60.0MiB/7820msec)
00:24:21.417 slat (usec): min=6, max=1838, avg=69.30, stdev=156.42
00:24:21.417 clat (usec): min=6113, max=81311, avg=20493.63, stdev=12489.30
00:24:21.417 lat (usec): min=6141, max=81322, avg=20562.93, stdev=12493.00
00:24:21.417 clat percentiles (usec):
00:24:21.417 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6980], 20.00th=[11207],
00:24:21.417 | 30.00th=[12387], 40.00th=[13960], 50.00th=[16712], 60.00th=[19268],
00:24:21.417 | 70.00th=[25035], 80.00th=[29230], 90.00th=[38011], 95.00th=[44827],
00:24:21.417 | 99.00th=[58983], 99.50th=[68682], 99.90th=[81265], 99.95th=[81265],
00:24:21.417 | 99.99th=[81265]
00:24:21.417 write: IOPS=60, BW=7775KiB/s (7961kB/s)(66.9MiB/8808msec); 0 zone resets
00:24:21.417 slat (usec): min=40, max=2164, avg=143.34, stdev=183.85
00:24:21.417 clat (msec): min=68, max=451, avg=130.59, stdev=63.87
00:24:21.417 lat (msec): min=69, max=451, avg=130.73, stdev=63.89
00:24:21.417 clat percentiles (msec):
00:24:21.417 | 1.00th=[ 71], 5.00th=[ 72], 10.00th=[ 77], 20.00th=[ 82],
00:24:21.417 | 30.00th=[ 86], 40.00th=[ 94], 50.00th=[ 109], 60.00th=[ 127],
00:24:21.417 | 70.00th=[ 150], 80.00th=[ 176], 90.00th=[ 215], 95.00th=[ 245],
00:24:21.417 | 99.00th=[ 405], 99.50th=[ 443], 99.90th=[ 451], 99.95th=[ 451],
00:24:21.417 | 99.99th=[ 451]
00:24:21.418 bw ( KiB/s): min= 1277, max=12800, per=0.71%, avg=6755.05, stdev=3455.32, samples=20
00:24:21.418 iops : min= 9, max= 100, avg=52.50, stdev=27.02, samples=20
00:24:21.418 lat (msec) : 10=6.90%, 20=21.67%, 50=17.14%, 100=25.91%, 250=25.91%
00:24:21.418 lat (msec) : 500=2.46%
00:24:21.418 cpu : usr=0.34%, sys=0.27%, ctx=1760, majf=0, minf=1
00:24:21.418 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:24:21.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:21.418 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:21.418 issued rwts: total=480,535,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:21.418 latency : target=0, window=0, percentile=100.00%, depth=8
00:24:21.418 job3: (groupid=0, jobs=1): err= 0: pid=71036: Mon Jul 22 17:02:22 2024
00:24:21.418 read: IOPS=58, BW=7462KiB/s (7641kB/s)(62.1MiB/8525msec)
00:24:21.418 slat (usec): min=7, max=2497, avg=59.13, stdev=152.30
00:24:21.418 clat (msec): min=4, max=118, avg=16.82, stdev=13.68
00:24:21.418 lat (msec): min=4, max=118, avg=16.88, stdev=13.67
00:24:21.418 clat percentiles (msec):
00:24:21.418 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 9],
00:24:21.418 | 30.00th=[ 11], 40.00th=[ 13], 50.00th=[ 15], 60.00th=[ 16],
00:24:21.418 | 70.00th=[ 18], 80.00th=[ 21], 90.00th=[ 28], 95.00th=[ 40],
00:24:21.418 | 99.00th=[ 93], 99.50th=[ 102], 99.90th=[ 118], 99.95th=[ 118],
00:24:21.418 | 99.99th=[ 118]
00:24:21.418 write: IOPS=71, BW=9131KiB/s (9350kB/s)(80.0MiB/8972msec); 0 zone resets
00:24:21.418 slat (usec): min=38, max=4271, avg=148.31, stdev=287.87
00:24:21.418 clat (msec): min=11, max=347, avg=111.42, stdev=56.60
00:24:21.418 lat (msec): min=11, max=348, avg=111.56, stdev=56.66
00:24:21.418 clat percentiles (msec):
00:24:21.418 | 1.00th=[ 14], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 78],
00:24:21.418 | 30.00th=[ 81], 40.00th=[ 86], 50.00th=[ 91], 60.00th=[ 99],
00:24:21.418 | 70.00th=[ 110], 80.00th=[ 136], 90.00th=[ 215], 95.00th=[ 241],
00:24:21.418 | 99.00th=[ 279], 99.50th=[ 296], 99.90th=[ 347], 99.95th=[ 347],
00:24:21.418 | 99.99th=[ 347]
00:24:21.418 bw ( KiB/s): min= 2048, max=15872, per=0.86%, avg=8189.65, stdev=3986.40, samples=20
00:24:21.418 iops : min= 16, max= 124, avg=63.90, stdev=31.13, samples=20
00:24:21.418 lat (msec) : 10=11.52%, 20=24.19%, 50=8.97%, 100=34.21%, 250=19.17%
00:24:21.418 lat (msec) : 500=1.93%
00:24:21.418 cpu : usr=0.40%, sys=0.32%, ctx=1802, majf=0, minf=5
00:24:21.418 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:24:21.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:21.418 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:21.418 issued rwts: total=497,640,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:21.418 latency : target=0, window=0, percentile=100.00%, depth=8
00:24:21.418 job4: (groupid=0, jobs=1): err= 0: pid=71041: Mon Jul 22 17:02:22 2024
00:24:21.418 read: IOPS=55, BW=7161KiB/s (7333kB/s)(60.0MiB/8580msec)
00:24:21.418 slat (usec): min=8, max=1200, avg=65.83, stdev=131.90
00:24:21.418 clat (msec): min=12, max=238, avg=29.45, stdev=30.38
00:24:21.418 lat (msec): min=12, max=238, avg=29.52, stdev=30.38
00:24:21.418 clat percentiles (msec):
00:24:21.418 | 1.00th=[ 14], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 18],
00:24:21.418 | 30.00th=[ 19], 40.00th=[ 21], 50.00th=[ 22], 60.00th=[ 24],
00:24:21.418 | 70.00th=[ 26], 80.00th=[ 32], 90.00th=[ 42], 95.00th=[ 71],
00:24:21.418 | 99.00th=[ 228], 99.50th=[ 232], 99.90th=[ 239], 99.95th=[ 239],
00:24:21.418 | 99.99th=[ 239]
00:24:21.418 write: IOPS=76, BW=9763KiB/s (9998kB/s)(79.2MiB/8312msec); 0 zone resets
00:24:21.418 slat (usec): min=34, max=1992, avg=125.11, stdev=150.75
00:24:21.418 clat (msec): min=7, max=409, avg=104.08, stdev=50.96
00:24:21.418 lat (msec): min=7, max=409, avg=104.21, stdev=50.96
00:24:21.418 clat percentiles (msec):
00:24:21.418 | 1.00th=[ 20], 5.00th=[ 70], 10.00th=[ 73], 20.00th=[ 77],
00:24:21.418 | 30.00th=[ 81], 40.00th=[ 85], 50.00th=[ 90], 60.00th=[ 95],
00:24:21.418 | 70.00th=[ 102], 80.00th=[ 118], 90.00th=[ 163], 95.00th=[ 218],
00:24:21.418 | 99.00th=[ 342], 99.50th=[ 355], 99.90th=[ 409], 99.95th=[ 409],
00:24:21.418 | 99.99th=[ 409]
00:24:21.418 bw ( KiB/s): min= 1792, max=15616, per=0.94%, avg=8913.89, stdev=3962.92, samples=18
00:24:21.418 iops : min= 14, max= 122, avg=69.56, stdev=30.92, samples=18
00:24:21.418 lat (msec) : 10=0.27%, 20=17.77%, 50=23.97%, 100=38.24%, 250=18.40%
00:24:21.418 lat (msec) : 500=1.35%
00:24:21.418 cpu : usr=0.40%, sys=0.28%, ctx=1837, majf=0, minf=3
00:24:21.418 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:24:21.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:21.418 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:21.418 issued rwts: total=480,634,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:21.418 latency : target=0, window=0, percentile=100.00%, depth=8
00:24:21.418 job5: (groupid=0, jobs=1): err= 0: pid=71044: Mon Jul 22 17:02:22 2024
00:24:21.418 read: IOPS=70, BW=9062KiB/s (9279kB/s)(60.0MiB/6780msec)
00:24:21.418 slat (usec): min=6, max=5120, avg=80.17, stdev=303.77
00:24:21.418 clat (msec): min=4, max=137, avg=17.66, stdev=19.54
00:24:21.418 lat (msec): min=4, max=137, avg=17.74, stdev=19.52
00:24:21.418 clat percentiles (msec):
00:24:21.418 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 11],
00:24:21.418 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14],
00:24:21.418 | 70.00th=[ 16], 80.00th=[ 17], 90.00th=[ 26], 95.00th=[ 47],
00:24:21.418 | 99.00th=[ 133], 99.50th=[ 138], 99.90th=[ 138], 99.95th=[ 138],
00:24:21.418 | 99.99th=[ 138]
00:24:21.418 write: IOPS=54, BW=6975KiB/s (7142kB/s)(61.2MiB/8992msec); 0 zone resets
00:24:21.418 slat (usec): min=40, max=4557, avg=178.75, stdev=357.19
00:24:21.418 clat (msec): min=68, max=479, avg=146.09, stdev=75.07
00:24:21.418 lat (msec): min=68, max=479, avg=146.27, stdev=75.09
00:24:21.418 clat percentiles (msec):
00:24:21.418 | 1.00th=[ 73], 5.00th=[ 78], 10.00th=[ 81], 20.00th=[ 88],
00:24:21.418 | 30.00th=[ 93], 40.00th=[ 103], 50.00th=[ 117], 60.00th=[ 140],
00:24:21.418 | 70.00th=[ 176], 80.00th=[ 205], 90.00th=[ 232], 95.00th=[ 268],
00:24:21.418 | 99.00th=[ 447], 99.50th=[ 472], 99.90th=[ 481], 99.95th=[ 481],
00:24:21.418 | 99.99th=[ 481]
00:24:21.418 bw ( KiB/s): min= 510, max=11776, per=0.65%, avg=6166.60, stdev=3382.72, samples=20
00:24:21.418 iops : min= 3, max= 92, avg=47.90, stdev=26.57, samples=20
00:24:21.418 lat (msec) : 10=8.76%, 20=34.02%, 50=4.43%, 100=20.62%, 250=28.45%
00:24:21.418 lat (msec) : 500=3.71%
00:24:21.418 cpu : usr=0.42%, sys=0.17%, ctx=1646, majf=0, minf=7
00:24:21.418 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=94.9%, 16=0.0%, 32=0.0%, >=64=0.0%
00:24:21.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:21.418 complete : 0=0.0%, 4=99.2%, 8=0.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:21.418 issued rwts: total=480,490,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:21.418 latency : target=0, window=0, percentile=100.00%, depth=8
00:24:21.418 job6: (groupid=0, jobs=1): err= 0: pid=71046: Mon Jul 22 17:02:22 2024
00:24:21.418 read: IOPS=59, BW=7611KiB/s (7794kB/s)(60.0MiB/8072msec)
00:24:21.418 slat (usec): min=7, max=1693, avg=75.20, stdev=170.08
00:24:21.418 clat (msec): min=5, max=127, avg=27.71, stdev=18.02
00:24:21.418 lat (msec): min=5, max=127, avg=27.78, stdev=18.01
00:24:21.418 clat percentiles (msec):
00:24:21.418 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 15], 20.00th=[ 18],
00:24:21.418 | 30.00th=[ 21], 40.00th=[ 22], 50.00th=[ 24], 60.00th=[ 27],
00:24:21.418 | 70.00th=[ 29], 80.00th=[ 34], 90.00th=[ 43], 95.00th=[ 57],
00:24:21.418 | 99.00th=[ 122], 99.50th=[ 126], 99.90th=[ 128], 99.95th=[ 128],
00:24:21.418 | 99.99th=[ 128]
00:24:21.418 write: IOPS=71, BW=9157KiB/s (9377kB/s)(74.9MiB/8373msec); 0 zone resets
00:24:21.418 slat (usec): min=38, max=3878, avg=137.90, stdev=224.03
00:24:21.418 clat (msec): min=63, max=439, avg=110.65, stdev=54.34
00:24:21.418 lat (msec): min=63, max=440, avg=110.78, stdev=54.35
00:24:21.418 clat percentiles (msec):
00:24:21.418 | 1.00th=[ 69], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 79],
00:24:21.418 | 30.00th=[ 82], 40.00th=[ 86], 50.00th=[ 90], 60.00th=[ 96],
00:24:21.418 | 70.00th=[ 109], 80.00th=[ 130], 90.00th=[ 182], 95.00th=[ 222],
00:24:21.418 | 99.00th=[ 342], 99.50th=[ 414], 99.90th=[ 439], 99.95th=[ 439],
00:24:21.418 | 99.99th=[ 439]
00:24:21.418 bw ( KiB/s): min= 1788, max=12544, per=0.84%, avg=7971.16, stdev=3627.68, samples=19
00:24:21.418 iops : min= 13, max= 98, avg=62.11, stdev=28.37, samples=19
00:24:21.418 lat (msec) : 10=2.78%, 20=10.29%, 50=28.36%, 100=38.00%, 250=18.54%
00:24:21.418 lat (msec) : 500=2.04%
00:24:21.418 cpu : usr=0.37%, sys=0.30%, ctx=1782, majf=0, minf=5
00:24:21.418 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:24:21.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:21.418 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:21.418 issued rwts: total=480,599,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:21.418 latency : target=0, window=0, percentile=100.00%, depth=8
00:24:21.418 job7: (groupid=0, jobs=1): err= 0: pid=71071: Mon Jul 22 17:02:22 2024
00:24:21.418 read: IOPS=57, BW=7381KiB/s (7558kB/s)(60.0MiB/8324msec)
00:24:21.418 slat (usec): min=6, max=2624, avg=87.98, stdev=217.76
00:24:21.418 clat (msec): min=15, max=115, avg=30.02, stdev=17.33
00:24:21.418 lat (msec): min=15, max=116, avg=30.11,
stdev=17.32 00:24:21.418 clat percentiles (msec): 00:24:21.418 | 1.00th=[ 17], 5.00th=[ 19], 10.00th=[ 20], 20.00th=[ 21], 00:24:21.418 | 30.00th=[ 22], 40.00th=[ 23], 50.00th=[ 25], 60.00th=[ 26], 00:24:21.418 | 70.00th=[ 29], 80.00th=[ 34], 90.00th=[ 51], 95.00th=[ 77], 00:24:21.418 | 99.00th=[ 101], 99.50th=[ 109], 99.90th=[ 116], 99.95th=[ 116], 00:24:21.418 | 99.99th=[ 116] 00:24:21.418 write: IOPS=75, BW=9653KiB/s (9885kB/s)(77.9MiB/8261msec); 0 zone resets 00:24:21.418 slat (usec): min=40, max=4116, avg=130.03, stdev=216.99 00:24:21.418 clat (msec): min=38, max=434, avg=105.10, stdev=51.43 00:24:21.418 lat (msec): min=38, max=434, avg=105.23, stdev=51.43 00:24:21.419 clat percentiles (msec): 00:24:21.419 | 1.00th=[ 45], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 78], 00:24:21.419 | 30.00th=[ 83], 40.00th=[ 87], 50.00th=[ 90], 60.00th=[ 94], 00:24:21.419 | 70.00th=[ 103], 80.00th=[ 114], 90.00th=[ 142], 95.00th=[ 220], 00:24:21.419 | 99.00th=[ 359], 99.50th=[ 368], 99.90th=[ 435], 99.95th=[ 435], 00:24:21.419 | 99.99th=[ 435] 00:24:21.419 bw ( KiB/s): min= 768, max=12774, per=0.87%, avg=8281.58, stdev=4013.45, samples=19 00:24:21.419 iops : min= 6, max= 99, avg=64.47, stdev=31.33, samples=19 00:24:21.419 lat (msec) : 20=8.25%, 50=31.64%, 100=41.34%, 250=16.95%, 500=1.81% 00:24:21.419 cpu : usr=0.41%, sys=0.26%, ctx=1855, majf=0, minf=5 00:24:21.419 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.419 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.419 issued rwts: total=480,623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.419 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.419 job8: (groupid=0, jobs=1): err= 0: pid=71200: Mon Jul 22 17:02:22 2024 00:24:21.419 read: IOPS=56, BW=7244KiB/s (7417kB/s)(60.0MiB/8482msec) 00:24:21.419 slat (usec): min=7, max=1288, avg=64.04, stdev=122.60 00:24:21.419 
clat (msec): min=6, max=142, avg=20.08, stdev=14.67 00:24:21.419 lat (msec): min=6, max=142, avg=20.15, stdev=14.68 00:24:21.419 clat percentiles (msec): 00:24:21.419 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 12], 20.00th=[ 13], 00:24:21.419 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 18], 00:24:21.419 | 70.00th=[ 20], 80.00th=[ 23], 90.00th=[ 34], 95.00th=[ 45], 00:24:21.419 | 99.00th=[ 89], 99.50th=[ 105], 99.90th=[ 144], 99.95th=[ 144], 00:24:21.419 | 99.99th=[ 144] 00:24:21.419 write: IOPS=71, BW=9175KiB/s (9395kB/s)(79.4MiB/8859msec); 0 zone resets 00:24:21.419 slat (usec): min=41, max=2525, avg=142.74, stdev=206.24 00:24:21.419 clat (msec): min=13, max=463, avg=110.86, stdev=62.13 00:24:21.419 lat (msec): min=14, max=463, avg=111.00, stdev=62.14 00:24:21.419 clat percentiles (msec): 00:24:21.419 | 1.00th=[ 18], 5.00th=[ 71], 10.00th=[ 74], 20.00th=[ 79], 00:24:21.419 | 30.00th=[ 84], 40.00th=[ 88], 50.00th=[ 92], 60.00th=[ 97], 00:24:21.419 | 70.00th=[ 108], 80.00th=[ 124], 90.00th=[ 186], 95.00th=[ 226], 00:24:21.419 | 99.00th=[ 422], 99.50th=[ 456], 99.90th=[ 464], 99.95th=[ 464], 00:24:21.419 | 99.99th=[ 464] 00:24:21.419 bw ( KiB/s): min= 256, max=14848, per=0.89%, avg=8446.68, stdev=3989.40, samples=19 00:24:21.419 iops : min= 2, max= 116, avg=65.95, stdev=31.13, samples=19 00:24:21.419 lat (msec) : 10=2.78%, 20=30.40%, 50=9.69%, 100=35.52%, 250=19.10% 00:24:21.419 lat (msec) : 500=2.51% 00:24:21.419 cpu : usr=0.47%, sys=0.22%, ctx=1891, majf=0, minf=5 00:24:21.419 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.419 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.419 issued rwts: total=480,635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.419 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.419 job9: (groupid=0, jobs=1): err= 0: pid=71334: Mon Jul 22 17:02:22 2024 00:24:21.419 
read: IOPS=58, BW=7487KiB/s (7667kB/s)(60.0MiB/8206msec) 00:24:21.419 slat (usec): min=6, max=2889, avg=80.65, stdev=222.03 00:24:21.419 clat (usec): min=15514, max=61680, avg=26998.92, stdev=8776.00 00:24:21.419 lat (usec): min=15569, max=61690, avg=27079.57, stdev=8779.35 00:24:21.419 clat percentiles (usec): 00:24:21.419 | 1.00th=[15664], 5.00th=[16319], 10.00th=[17171], 20.00th=[19530], 00:24:21.419 | 30.00th=[21365], 40.00th=[23200], 50.00th=[25297], 60.00th=[27657], 00:24:21.419 | 70.00th=[30278], 80.00th=[33424], 90.00th=[38011], 95.00th=[41157], 00:24:21.419 | 99.00th=[59507], 99.50th=[60031], 99.90th=[61604], 99.95th=[61604], 00:24:21.419 | 99.99th=[61604] 00:24:21.419 write: IOPS=74, BW=9564KiB/s (9793kB/s)(78.8MiB/8432msec); 0 zone resets 00:24:21.419 slat (usec): min=39, max=2098, avg=135.38, stdev=173.27 00:24:21.419 clat (msec): min=54, max=461, avg=106.04, stdev=51.41 00:24:21.419 lat (msec): min=54, max=461, avg=106.18, stdev=51.41 00:24:21.419 clat percentiles (msec): 00:24:21.419 | 1.00th=[ 62], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 78], 00:24:21.419 | 30.00th=[ 81], 40.00th=[ 86], 50.00th=[ 91], 60.00th=[ 95], 00:24:21.419 | 70.00th=[ 104], 80.00th=[ 118], 90.00th=[ 146], 95.00th=[ 220], 00:24:21.419 | 99.00th=[ 317], 99.50th=[ 405], 99.90th=[ 460], 99.95th=[ 460], 00:24:21.419 | 99.99th=[ 460] 00:24:21.419 bw ( KiB/s): min= 768, max=12800, per=0.88%, avg=8379.79, stdev=3837.17, samples=19 00:24:21.419 iops : min= 6, max= 100, avg=65.37, stdev=29.99, samples=19 00:24:21.419 lat (msec) : 20=10.09%, 50=31.98%, 100=38.56%, 250=17.75%, 500=1.62% 00:24:21.419 cpu : usr=0.37%, sys=0.31%, ctx=1853, majf=0, minf=1 00:24:21.419 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.419 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.419 issued rwts: total=480,630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.419 
latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.419 job10: (groupid=0, jobs=1): err= 0: pid=71353: Mon Jul 22 17:02:22 2024 00:24:21.419 read: IOPS=91, BW=11.5MiB/s (12.0MB/s)(100MiB/8718msec) 00:24:21.419 slat (usec): min=6, max=2861, avg=64.01, stdev=147.38 00:24:21.419 clat (msec): min=3, max=189, avg=17.49, stdev=20.02 00:24:21.419 lat (msec): min=3, max=189, avg=17.56, stdev=20.02 00:24:21.419 clat percentiles (msec): 00:24:21.419 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:24:21.419 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 15], 00:24:21.419 | 70.00th=[ 17], 80.00th=[ 20], 90.00th=[ 28], 95.00th=[ 47], 00:24:21.419 | 99.00th=[ 82], 99.50th=[ 182], 99.90th=[ 190], 99.95th=[ 190], 00:24:21.419 | 99.99th=[ 190] 00:24:21.419 write: IOPS=100, BW=12.6MiB/s (13.2MB/s)(104MiB/8285msec); 0 zone resets 00:24:21.419 slat (usec): min=31, max=2497, avg=136.47, stdev=190.60 00:24:21.419 clat (msec): min=34, max=281, avg=78.58, stdev=38.08 00:24:21.419 lat (msec): min=34, max=281, avg=78.71, stdev=38.09 00:24:21.419 clat percentiles (msec): 00:24:21.419 | 1.00th=[ 49], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 55], 00:24:21.419 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 65], 60.00th=[ 71], 00:24:21.419 | 70.00th=[ 82], 80.00th=[ 93], 90.00th=[ 124], 95.00th=[ 157], 00:24:21.419 | 99.00th=[ 247], 99.50th=[ 257], 99.90th=[ 284], 99.95th=[ 284], 00:24:21.419 | 99.99th=[ 284] 00:24:21.419 bw ( KiB/s): min= 512, max=18139, per=1.11%, avg=10582.20, stdev=5435.33, samples=20 00:24:21.419 iops : min= 4, max= 141, avg=82.50, stdev=42.46, samples=20 00:24:21.419 lat (msec) : 4=0.06%, 10=15.06%, 20=24.60%, 50=9.36%, 100=42.11% 00:24:21.419 lat (msec) : 250=8.45%, 500=0.37% 00:24:21.419 cpu : usr=0.62%, sys=0.37%, ctx=2656, majf=0, minf=3 00:24:21.419 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.419 complete : 0=0.0%, 
4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.419 issued rwts: total=800,834,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.419 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.419 job11: (groupid=0, jobs=1): err= 0: pid=71503: Mon Jul 22 17:02:22 2024 00:24:21.419 read: IOPS=91, BW=11.4MiB/s (12.0MB/s)(100MiB/8750msec) 00:24:21.419 slat (usec): min=6, max=1179, avg=68.24, stdev=141.56 00:24:21.419 clat (usec): min=5513, max=68033, avg=15950.99, stdev=8336.78 00:24:21.419 lat (usec): min=5536, max=68082, avg=16019.23, stdev=8348.35 00:24:21.419 clat percentiles (usec): 00:24:21.419 | 1.00th=[ 5997], 5.00th=[ 6718], 10.00th=[ 7701], 20.00th=[ 9896], 00:24:21.419 | 30.00th=[11863], 40.00th=[13042], 50.00th=[14091], 60.00th=[15533], 00:24:21.419 | 70.00th=[17433], 80.00th=[20317], 90.00th=[24773], 95.00th=[31065], 00:24:21.419 | 99.00th=[53216], 99.50th=[62129], 99.90th=[67634], 99.95th=[67634], 00:24:21.419 | 99.99th=[67634] 00:24:21.419 write: IOPS=103, BW=13.0MiB/s (13.6MB/s)(110MiB/8441msec); 0 zone resets 00:24:21.419 slat (usec): min=40, max=2983, avg=139.45, stdev=206.44 00:24:21.419 clat (msec): min=10, max=279, avg=76.39, stdev=34.32 00:24:21.419 lat (msec): min=10, max=280, avg=76.53, stdev=34.32 00:24:21.419 clat percentiles (msec): 00:24:21.419 | 1.00th=[ 42], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 56], 00:24:21.419 | 30.00th=[ 60], 40.00th=[ 63], 50.00th=[ 66], 60.00th=[ 70], 00:24:21.419 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 123], 95.00th=[ 155], 00:24:21.419 | 99.00th=[ 203], 99.50th=[ 224], 99.90th=[ 279], 99.95th=[ 279], 00:24:21.419 | 99.99th=[ 279] 00:24:21.419 bw ( KiB/s): min= 1792, max=18688, per=1.17%, avg=11122.90, stdev=5555.14, samples=20 00:24:21.419 iops : min= 14, max= 146, avg=86.80, stdev=43.43, samples=20 00:24:21.419 lat (msec) : 10=9.84%, 20=28.28%, 50=11.46%, 100=42.84%, 250=7.34% 00:24:21.419 lat (msec) : 500=0.24% 00:24:21.419 cpu : usr=0.63%, sys=0.39%, ctx=2670, majf=0, minf=5 00:24:21.419 IO 
depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.419 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.419 issued rwts: total=800,876,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.419 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.419 job12: (groupid=0, jobs=1): err= 0: pid=71616: Mon Jul 22 17:02:22 2024 00:24:21.419 read: IOPS=90, BW=11.3MiB/s (11.8MB/s)(96.2MiB/8552msec) 00:24:21.419 slat (usec): min=6, max=2773, avg=60.78, stdev=146.12 00:24:21.419 clat (msec): min=3, max=152, avg=16.13, stdev=16.41 00:24:21.419 lat (msec): min=3, max=152, avg=16.19, stdev=16.40 00:24:21.419 clat percentiles (msec): 00:24:21.420 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 9], 00:24:21.420 | 30.00th=[ 10], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:24:21.420 | 70.00th=[ 16], 80.00th=[ 18], 90.00th=[ 24], 95.00th=[ 38], 00:24:21.420 | 99.00th=[ 113], 99.50th=[ 146], 99.90th=[ 153], 99.95th=[ 153], 00:24:21.420 | 99.99th=[ 153] 00:24:21.420 write: IOPS=94, BW=11.8MiB/s (12.4MB/s)(100MiB/8443msec); 0 zone resets 00:24:21.420 slat (usec): min=37, max=4929, avg=143.60, stdev=256.68 00:24:21.420 clat (msec): min=42, max=388, avg=83.77, stdev=42.20 00:24:21.420 lat (msec): min=42, max=389, avg=83.91, stdev=42.20 00:24:21.420 clat percentiles (msec): 00:24:21.420 | 1.00th=[ 48], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 56], 00:24:21.420 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 80], 00:24:21.420 | 70.00th=[ 91], 80.00th=[ 107], 90.00th=[ 126], 95.00th=[ 142], 00:24:21.420 | 99.00th=[ 271], 99.50th=[ 363], 99.90th=[ 388], 99.95th=[ 388], 00:24:21.420 | 99.99th=[ 388] 00:24:21.420 bw ( KiB/s): min= 766, max=18432, per=1.13%, avg=10778.05, stdev=4952.54, samples=19 00:24:21.420 iops : min= 5, max= 144, avg=84.00, stdev=38.91, samples=19 00:24:21.420 lat (msec) : 4=0.06%, 10=15.16%, 20=26.94%, 50=7.45%, 
100=38.15% 00:24:21.420 lat (msec) : 250=11.66%, 500=0.57% 00:24:21.420 cpu : usr=0.63%, sys=0.32%, ctx=2488, majf=0, minf=5 00:24:21.420 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.420 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.420 issued rwts: total=770,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.420 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.420 job13: (groupid=0, jobs=1): err= 0: pid=71679: Mon Jul 22 17:02:22 2024 00:24:21.420 read: IOPS=77, BW=9899KiB/s (10.1MB/s)(80.0MiB/8276msec) 00:24:21.420 slat (usec): min=7, max=854, avg=57.96, stdev=108.23 00:24:21.420 clat (msec): min=3, max=193, avg=19.86, stdev=28.87 00:24:21.420 lat (msec): min=3, max=193, avg=19.92, stdev=28.88 00:24:21.420 clat percentiles (msec): 00:24:21.420 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 9], 00:24:21.420 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 13], 00:24:21.420 | 70.00th=[ 15], 80.00th=[ 19], 90.00th=[ 31], 95.00th=[ 78], 00:24:21.420 | 99.00th=[ 182], 99.50th=[ 190], 99.90th=[ 194], 99.95th=[ 194], 00:24:21.420 | 99.99th=[ 194] 00:24:21.420 write: IOPS=88, BW=11.0MiB/s (11.6MB/s)(93.0MiB/8437msec); 0 zone resets 00:24:21.420 slat (usec): min=38, max=3031, avg=147.47, stdev=224.59 00:24:21.420 clat (msec): min=42, max=208, avg=90.11, stdev=31.50 00:24:21.420 lat (msec): min=42, max=208, avg=90.26, stdev=31.51 00:24:21.420 clat percentiles (msec): 00:24:21.420 | 1.00th=[ 50], 5.00th=[ 53], 10.00th=[ 57], 20.00th=[ 62], 00:24:21.420 | 30.00th=[ 66], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 94], 00:24:21.420 | 70.00th=[ 105], 80.00th=[ 120], 90.00th=[ 138], 95.00th=[ 148], 00:24:21.420 | 99.00th=[ 171], 99.50th=[ 194], 99.90th=[ 209], 99.95th=[ 209], 00:24:21.420 | 99.99th=[ 209] 00:24:21.420 bw ( KiB/s): min= 1280, max=15872, per=0.99%, avg=9433.60, stdev=4172.45, samples=20 
00:24:21.420 iops : min= 10, max= 124, avg=73.60, stdev=32.67, samples=20 00:24:21.420 lat (msec) : 4=0.14%, 10=18.42%, 20=19.65%, 50=4.99%, 100=36.92% 00:24:21.420 lat (msec) : 250=19.87% 00:24:21.420 cpu : usr=0.46%, sys=0.36%, ctx=2318, majf=0, minf=3 00:24:21.420 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.420 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.420 issued rwts: total=640,744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.420 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.420 job14: (groupid=0, jobs=1): err= 0: pid=71680: Mon Jul 22 17:02:22 2024 00:24:21.420 read: IOPS=91, BW=11.4MiB/s (11.9MB/s)(100MiB/8780msec) 00:24:21.420 slat (usec): min=7, max=1213, avg=57.90, stdev=105.17 00:24:21.420 clat (usec): min=4181, max=76016, avg=15116.55, stdev=7977.73 00:24:21.420 lat (usec): min=4198, max=76032, avg=15174.45, stdev=7976.71 00:24:21.420 clat percentiles (usec): 00:24:21.420 | 1.00th=[ 6456], 5.00th=[ 7504], 10.00th=[ 8455], 20.00th=[10159], 00:24:21.420 | 30.00th=[10945], 40.00th=[11994], 50.00th=[13042], 60.00th=[14615], 00:24:21.420 | 70.00th=[16188], 80.00th=[18220], 90.00th=[22938], 95.00th=[27919], 00:24:21.420 | 99.00th=[43779], 99.50th=[65799], 99.90th=[76022], 99.95th=[76022], 00:24:21.420 | 99.99th=[76022] 00:24:21.420 write: IOPS=100, BW=12.6MiB/s (13.2MB/s)(108MiB/8529msec); 0 zone resets 00:24:21.420 slat (usec): min=39, max=3768, avg=147.21, stdev=257.78 00:24:21.420 clat (msec): min=22, max=308, avg=78.44, stdev=38.26 00:24:21.420 lat (msec): min=23, max=308, avg=78.59, stdev=38.28 00:24:21.420 clat percentiles (msec): 00:24:21.420 | 1.00th=[ 47], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 55], 00:24:21.420 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 65], 60.00th=[ 70], 00:24:21.420 | 70.00th=[ 79], 80.00th=[ 90], 90.00th=[ 128], 95.00th=[ 161], 00:24:21.420 | 99.00th=[ 
218], 99.50th=[ 264], 99.90th=[ 309], 99.95th=[ 309], 00:24:21.420 | 99.99th=[ 309] 00:24:21.420 bw ( KiB/s): min= 2816, max=18176, per=1.15%, avg=10931.50, stdev=5636.21, samples=20 00:24:21.420 iops : min= 22, max= 142, avg=85.35, stdev=44.01, samples=20 00:24:21.420 lat (msec) : 10=8.79%, 20=32.57%, 50=8.07%, 100=41.36%, 250=8.79% 00:24:21.420 lat (msec) : 500=0.42% 00:24:21.420 cpu : usr=0.57%, sys=0.40%, ctx=2746, majf=0, minf=5 00:24:21.420 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.420 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.420 issued rwts: total=800,861,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.420 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.420 job15: (groupid=0, jobs=1): err= 0: pid=71681: Mon Jul 22 17:02:22 2024 00:24:21.420 read: IOPS=90, BW=11.3MiB/s (11.8MB/s)(100MiB/8886msec) 00:24:21.420 slat (usec): min=6, max=1354, avg=48.14, stdev=95.44 00:24:21.420 clat (usec): min=4241, max=42149, avg=11562.76, stdev=5358.26 00:24:21.420 lat (usec): min=4289, max=42158, avg=11610.90, stdev=5359.59 00:24:21.420 clat percentiles (usec): 00:24:21.420 | 1.00th=[ 4621], 5.00th=[ 5407], 10.00th=[ 6521], 20.00th=[ 7439], 00:24:21.420 | 30.00th=[ 8291], 40.00th=[ 9110], 50.00th=[10290], 60.00th=[11338], 00:24:21.420 | 70.00th=[12911], 80.00th=[15139], 90.00th=[17433], 95.00th=[22676], 00:24:21.420 | 99.00th=[28967], 99.50th=[33424], 99.90th=[42206], 99.95th=[42206], 00:24:21.420 | 99.99th=[42206] 00:24:21.420 write: IOPS=99, BW=12.4MiB/s (13.0MB/s)(110MiB/8894msec); 0 zone resets 00:24:21.420 slat (usec): min=38, max=9610, avg=155.28, stdev=386.55 00:24:21.420 clat (msec): min=2, max=322, avg=80.04, stdev=40.51 00:24:21.420 lat (msec): min=2, max=322, avg=80.19, stdev=40.49 00:24:21.420 clat percentiles (msec): 00:24:21.420 | 1.00th=[ 21], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 
57], 00:24:21.420 | 30.00th=[ 60], 40.00th=[ 63], 50.00th=[ 66], 60.00th=[ 71], 00:24:21.420 | 70.00th=[ 81], 80.00th=[ 97], 90.00th=[ 127], 95.00th=[ 153], 00:24:21.420 | 99.00th=[ 253], 99.50th=[ 300], 99.90th=[ 321], 99.95th=[ 321], 00:24:21.420 | 99.99th=[ 321] 00:24:21.420 bw ( KiB/s): min= 2816, max=18212, per=1.17%, avg=11173.85, stdev=5244.36, samples=20 00:24:21.420 iops : min= 22, max= 142, avg=87.15, stdev=41.06, samples=20 00:24:21.420 lat (msec) : 4=0.18%, 10=23.02%, 20=20.58%, 50=6.60%, 100=39.50% 00:24:21.420 lat (msec) : 250=9.46%, 500=0.65% 00:24:21.420 cpu : usr=0.65%, sys=0.38%, ctx=2576, majf=0, minf=3 00:24:21.420 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.420 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.420 issued rwts: total=800,881,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.420 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.420 job16: (groupid=0, jobs=1): err= 0: pid=71682: Mon Jul 22 17:02:22 2024 00:24:21.420 read: IOPS=73, BW=9403KiB/s (9629kB/s)(80.0MiB/8712msec) 00:24:21.420 slat (usec): min=8, max=2483, avg=74.66, stdev=170.25 00:24:21.420 clat (msec): min=3, max=193, avg=23.52, stdev=25.94 00:24:21.420 lat (msec): min=3, max=193, avg=23.59, stdev=25.95 00:24:21.420 clat percentiles (msec): 00:24:21.420 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 12], 00:24:21.420 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 17], 60.00th=[ 19], 00:24:21.420 | 70.00th=[ 22], 80.00th=[ 26], 90.00th=[ 39], 95.00th=[ 70], 00:24:21.420 | 99.00th=[ 132], 99.50th=[ 190], 99.90th=[ 194], 99.95th=[ 194], 00:24:21.420 | 99.99th=[ 194] 00:24:21.420 write: IOPS=91, BW=11.5MiB/s (12.1MB/s)(93.5MiB/8136msec); 0 zone resets 00:24:21.420 slat (usec): min=40, max=3045, avg=147.77, stdev=261.69 00:24:21.420 clat (msec): min=28, max=261, avg=85.35, stdev=33.93 00:24:21.420 lat (msec): 
min=28, max=261, avg=85.49, stdev=33.96 00:24:21.420 clat percentiles (msec): 00:24:21.420 | 1.00th=[ 35], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 60], 00:24:21.420 | 30.00th=[ 63], 40.00th=[ 68], 50.00th=[ 78], 60.00th=[ 86], 00:24:21.420 | 70.00th=[ 95], 80.00th=[ 110], 90.00th=[ 130], 95.00th=[ 148], 00:24:21.420 | 99.00th=[ 211], 99.50th=[ 232], 99.90th=[ 262], 99.95th=[ 262], 00:24:21.420 | 99.99th=[ 262] 00:24:21.420 bw ( KiB/s): min= 512, max=17408, per=1.00%, avg=9482.10, stdev=5183.47, samples=20 00:24:21.420 iops : min= 4, max= 136, avg=73.90, stdev=40.54, samples=20 00:24:21.420 lat (msec) : 4=0.29%, 10=6.63%, 20=23.99%, 50=13.90%, 100=39.12% 00:24:21.420 lat (msec) : 250=15.99%, 500=0.07% 00:24:21.420 cpu : usr=0.57%, sys=0.31%, ctx=2245, majf=0, minf=1 00:24:21.420 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.420 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.421 issued rwts: total=640,748,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.421 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.421 job17: (groupid=0, jobs=1): err= 0: pid=71683: Mon Jul 22 17:02:22 2024 00:24:21.421 read: IOPS=91, BW=11.4MiB/s (11.9MB/s)(100MiB/8791msec) 00:24:21.421 slat (usec): min=6, max=1083, avg=56.81, stdev=96.92 00:24:21.421 clat (usec): min=5898, max=36609, avg=14230.76, stdev=4718.29 00:24:21.421 lat (usec): min=6104, max=36624, avg=14287.56, stdev=4726.83 00:24:21.421 clat percentiles (usec): 00:24:21.421 | 1.00th=[ 6652], 5.00th=[ 7832], 10.00th=[ 8848], 20.00th=[10421], 00:24:21.421 | 30.00th=[11338], 40.00th=[12125], 50.00th=[13698], 60.00th=[15139], 00:24:21.421 | 70.00th=[15926], 80.00th=[17433], 90.00th=[20317], 95.00th=[22414], 00:24:21.421 | 99.00th=[28967], 99.50th=[32637], 99.90th=[36439], 99.95th=[36439], 00:24:21.421 | 99.99th=[36439] 00:24:21.421 write: IOPS=104, BW=13.1MiB/s 
(13.7MB/s)(113MiB/8597msec); 0 zone resets 00:24:21.421 slat (usec): min=35, max=4028, avg=129.45, stdev=232.95 00:24:21.421 clat (msec): min=43, max=227, avg=75.70, stdev=30.87 00:24:21.421 lat (msec): min=43, max=227, avg=75.83, stdev=30.86 00:24:21.421 clat percentiles (msec): 00:24:21.421 | 1.00th=[ 48], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 54], 00:24:21.421 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 64], 60.00th=[ 69], 00:24:21.421 | 70.00th=[ 78], 80.00th=[ 95], 90.00th=[ 117], 95.00th=[ 150], 00:24:21.421 | 99.00th=[ 192], 99.50th=[ 207], 99.90th=[ 228], 99.95th=[ 228], 00:24:21.421 | 99.99th=[ 228] 00:24:21.421 bw ( KiB/s): min= 2560, max=18432, per=1.20%, avg=11430.15, stdev=5380.76, samples=20 00:24:21.421 iops : min= 20, max= 144, avg=89.20, stdev=42.09, samples=20 00:24:21.421 lat (msec) : 10=7.76%, 20=34.10%, 50=7.52%, 100=41.62%, 250=8.99% 00:24:21.421 cpu : usr=0.63%, sys=0.37%, ctx=2723, majf=0, minf=3 00:24:21.421 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.421 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.421 issued rwts: total=800,901,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.421 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.421 job18: (groupid=0, jobs=1): err= 0: pid=71684: Mon Jul 22 17:02:22 2024 00:24:21.421 read: IOPS=75, BW=9700KiB/s (9933kB/s)(80.0MiB/8445msec) 00:24:21.421 slat (usec): min=6, max=1059, avg=59.24, stdev=104.91 00:24:21.421 clat (msec): min=5, max=235, avg=19.61, stdev=23.96 00:24:21.421 lat (msec): min=5, max=235, avg=19.67, stdev=23.96 00:24:21.421 clat percentiles (msec): 00:24:21.421 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 11], 00:24:21.421 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 14], 00:24:21.421 | 70.00th=[ 17], 80.00th=[ 21], 90.00th=[ 27], 95.00th=[ 52], 00:24:21.421 | 99.00th=[ 155], 99.50th=[ 157], 
99.90th=[ 236], 99.95th=[ 236], 00:24:21.421 | 99.99th=[ 236] 00:24:21.421 write: IOPS=94, BW=11.8MiB/s (12.3MB/s)(99.4MiB/8449msec); 0 zone resets 00:24:21.421 slat (usec): min=39, max=1412, avg=126.14, stdev=151.20 00:24:21.421 clat (msec): min=48, max=278, avg=84.34, stdev=33.52 00:24:21.421 lat (msec): min=48, max=278, avg=84.46, stdev=33.53 00:24:21.421 clat percentiles (msec): 00:24:21.421 | 1.00th=[ 50], 5.00th=[ 52], 10.00th=[ 54], 20.00th=[ 58], 00:24:21.421 | 30.00th=[ 62], 40.00th=[ 66], 50.00th=[ 73], 60.00th=[ 84], 00:24:21.421 | 70.00th=[ 96], 80.00th=[ 109], 90.00th=[ 129], 95.00th=[ 146], 00:24:21.421 | 99.00th=[ 205], 99.50th=[ 222], 99.90th=[ 279], 99.95th=[ 279], 00:24:21.421 | 99.99th=[ 279] 00:24:21.421 bw ( KiB/s): min= 1792, max=17408, per=1.06%, avg=10085.85, stdev=4797.67, samples=20 00:24:21.421 iops : min= 14, max= 136, avg=78.70, stdev=37.48, samples=20 00:24:21.421 lat (msec) : 10=6.69%, 20=28.50%, 50=8.64%, 100=40.84%, 250=15.12% 00:24:21.421 lat (msec) : 500=0.21% 00:24:21.421 cpu : usr=0.60%, sys=0.23%, ctx=2408, majf=0, minf=3 00:24:21.421 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.421 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.421 issued rwts: total=640,795,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.421 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.421 job19: (groupid=0, jobs=1): err= 0: pid=71685: Mon Jul 22 17:02:22 2024 00:24:21.421 read: IOPS=83, BW=10.5MiB/s (11.0MB/s)(80.0MiB/7649msec) 00:24:21.421 slat (usec): min=6, max=1524, avg=59.95, stdev=120.13 00:24:21.421 clat (msec): min=3, max=290, avg=17.97, stdev=32.96 00:24:21.421 lat (msec): min=3, max=290, avg=18.03, stdev=32.96 00:24:21.421 clat percentiles (msec): 00:24:21.421 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:24:21.421 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 11], 
60.00th=[ 12], 00:24:21.421 | 70.00th=[ 13], 80.00th=[ 15], 90.00th=[ 22], 95.00th=[ 47], 00:24:21.421 | 99.00th=[ 271], 99.50th=[ 275], 99.90th=[ 292], 99.95th=[ 292], 00:24:21.421 | 99.99th=[ 292] 00:24:21.421 write: IOPS=87, BW=10.9MiB/s (11.4MB/s)(93.6MiB/8582msec); 0 zone resets 00:24:21.421 slat (usec): min=36, max=3441, avg=130.51, stdev=214.08 00:24:21.421 clat (msec): min=47, max=394, avg=91.17, stdev=45.54 00:24:21.421 lat (msec): min=47, max=394, avg=91.30, stdev=45.55 00:24:21.421 clat percentiles (msec): 00:24:21.421 | 1.00th=[ 50], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 61], 00:24:21.421 | 30.00th=[ 65], 40.00th=[ 68], 50.00th=[ 74], 60.00th=[ 88], 00:24:21.421 | 70.00th=[ 101], 80.00th=[ 118], 90.00th=[ 138], 95.00th=[ 159], 00:24:21.421 | 99.00th=[ 313], 99.50th=[ 338], 99.90th=[ 393], 99.95th=[ 393], 00:24:21.421 | 99.99th=[ 393] 00:24:21.421 bw ( KiB/s): min= 2048, max=17408, per=1.05%, avg=9992.11, stdev=4461.75, samples=19 00:24:21.421 iops : min= 16, max= 136, avg=77.84, stdev=34.88, samples=19 00:24:21.421 lat (msec) : 4=0.14%, 10=19.01%, 20=21.81%, 50=3.60%, 100=38.16% 00:24:21.421 lat (msec) : 250=15.77%, 500=1.51% 00:24:21.421 cpu : usr=0.59%, sys=0.25%, ctx=2181, majf=0, minf=3 00:24:21.421 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.421 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.421 issued rwts: total=640,749,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.421 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.421 job20: (groupid=0, jobs=1): err= 0: pid=71686: Mon Jul 22 17:02:22 2024 00:24:21.421 read: IOPS=88, BW=11.1MiB/s (11.6MB/s)(100MiB/9013msec) 00:24:21.421 slat (usec): min=6, max=5309, avg=47.20, stdev=197.06 00:24:21.421 clat (usec): min=4038, max=83190, avg=12417.90, stdev=9863.52 00:24:21.421 lat (usec): min=4057, max=83212, avg=12465.10, stdev=9878.65 
00:24:21.421 clat percentiles (usec): 00:24:21.421 | 1.00th=[ 4621], 5.00th=[ 5473], 10.00th=[ 6259], 20.00th=[ 7177], 00:24:21.421 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 9503], 60.00th=[10945], 00:24:21.421 | 70.00th=[12125], 80.00th=[15401], 90.00th=[20317], 95.00th=[27919], 00:24:21.421 | 99.00th=[62129], 99.50th=[79168], 99.90th=[83362], 99.95th=[83362], 00:24:21.421 | 99.99th=[83362] 00:24:21.421 write: IOPS=104, BW=13.1MiB/s (13.7MB/s)(115MiB/8797msec); 0 zone resets 00:24:21.421 slat (usec): min=34, max=26033, avg=163.89, stdev=872.06 00:24:21.421 clat (msec): min=3, max=240, avg=75.82, stdev=32.14 00:24:21.421 lat (msec): min=4, max=240, avg=75.99, stdev=32.21 00:24:21.421 clat percentiles (msec): 00:24:21.421 | 1.00th=[ 21], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 55], 00:24:21.421 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 66], 60.00th=[ 70], 00:24:21.421 | 70.00th=[ 79], 80.00th=[ 92], 90.00th=[ 118], 95.00th=[ 146], 00:24:21.421 | 99.00th=[ 194], 99.50th=[ 218], 99.90th=[ 241], 99.95th=[ 241], 00:24:21.421 | 99.99th=[ 241] 00:24:21.421 bw ( KiB/s): min= 2304, max=21248, per=1.23%, avg=11695.80, stdev=5319.70, samples=20 00:24:21.421 iops : min= 18, max= 166, avg=91.25, stdev=41.52, samples=20 00:24:21.421 lat (msec) : 4=0.06%, 10=24.87%, 20=17.32%, 50=8.08%, 100=40.62% 00:24:21.421 lat (msec) : 250=9.06% 00:24:21.421 cpu : usr=0.56%, sys=0.48%, ctx=2753, majf=0, minf=1 00:24:21.421 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.421 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.421 issued rwts: total=800,921,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.421 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.421 job21: (groupid=0, jobs=1): err= 0: pid=71687: Mon Jul 22 17:02:22 2024 00:24:21.421 read: IOPS=90, BW=11.4MiB/s (11.9MB/s)(100MiB/8801msec) 00:24:21.421 slat (usec): min=6, 
max=1186, avg=59.07, stdev=112.99 00:24:21.421 clat (usec): min=3996, max=61666, avg=12513.02, stdev=7332.34 00:24:21.421 lat (usec): min=4330, max=61698, avg=12572.09, stdev=7329.39 00:24:21.421 clat percentiles (usec): 00:24:21.421 | 1.00th=[ 4555], 5.00th=[ 5800], 10.00th=[ 6587], 20.00th=[ 7701], 00:24:21.421 | 30.00th=[ 8356], 40.00th=[ 9241], 50.00th=[10683], 60.00th=[11994], 00:24:21.421 | 70.00th=[13435], 80.00th=[15270], 90.00th=[20579], 95.00th=[26346], 00:24:21.421 | 99.00th=[41681], 99.50th=[49021], 99.90th=[61604], 99.95th=[61604], 00:24:21.421 | 99.99th=[61604] 00:24:21.421 write: IOPS=102, BW=12.8MiB/s (13.4MB/s)(112MiB/8780msec); 0 zone resets 00:24:21.421 slat (usec): min=35, max=2250, avg=126.97, stdev=165.46 00:24:21.421 clat (msec): min=24, max=322, avg=77.73, stdev=41.62 00:24:21.421 lat (msec): min=24, max=322, avg=77.86, stdev=41.61 00:24:21.421 clat percentiles (msec): 00:24:21.421 | 1.00th=[ 46], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 54], 00:24:21.421 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 66], 60.00th=[ 70], 00:24:21.421 | 70.00th=[ 75], 80.00th=[ 87], 90.00th=[ 115], 95.00th=[ 161], 00:24:21.421 | 99.00th=[ 279], 99.50th=[ 292], 99.90th=[ 321], 99.95th=[ 321], 00:24:21.421 | 99.99th=[ 321] 00:24:21.421 bw ( KiB/s): min= 2560, max=17884, per=1.20%, avg=11374.10, stdev=5464.70, samples=20 00:24:21.422 iops : min= 20, max= 139, avg=88.70, stdev=42.67, samples=20 00:24:21.422 lat (msec) : 4=0.06%, 10=21.64%, 20=20.28%, 50=8.55%, 100=42.63% 00:24:21.422 lat (msec) : 250=5.96%, 500=0.88% 00:24:21.422 cpu : usr=0.56%, sys=0.41%, ctx=2755, majf=0, minf=1 00:24:21.422 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.422 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.422 issued rwts: total=800,896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.422 latency : target=0, window=0, percentile=100.00%, 
depth=8 00:24:21.422 job22: (groupid=0, jobs=1): err= 0: pid=71689: Mon Jul 22 17:02:22 2024 00:24:21.422 read: IOPS=89, BW=11.1MiB/s (11.7MB/s)(100MiB/8984msec) 00:24:21.422 slat (usec): min=6, max=2977, avg=78.01, stdev=223.98 00:24:21.422 clat (usec): min=3282, max=78778, avg=11707.32, stdev=8896.43 00:24:21.422 lat (usec): min=3423, max=78815, avg=11785.33, stdev=8908.04 00:24:21.422 clat percentiles (usec): 00:24:21.422 | 1.00th=[ 5014], 5.00th=[ 5932], 10.00th=[ 6390], 20.00th=[ 7046], 00:24:21.422 | 30.00th=[ 7635], 40.00th=[ 8291], 50.00th=[ 9241], 60.00th=[10552], 00:24:21.422 | 70.00th=[11731], 80.00th=[13698], 90.00th=[17433], 95.00th=[23462], 00:24:21.422 | 99.00th=[61080], 99.50th=[70779], 99.90th=[79168], 99.95th=[79168], 00:24:21.422 | 99.99th=[79168] 00:24:21.422 write: IOPS=102, BW=12.9MiB/s (13.5MB/s)(114MiB/8861msec); 0 zone resets 00:24:21.422 slat (usec): min=32, max=2140, avg=130.39, stdev=187.06 00:24:21.422 clat (msec): min=13, max=333, avg=77.30, stdev=36.00 00:24:21.422 lat (msec): min=13, max=333, avg=77.43, stdev=36.02 00:24:21.422 clat percentiles (msec): 00:24:21.422 | 1.00th=[ 32], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 53], 00:24:21.422 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 66], 60.00th=[ 71], 00:24:21.422 | 70.00th=[ 82], 80.00th=[ 93], 90.00th=[ 125], 95.00th=[ 153], 00:24:21.422 | 99.00th=[ 211], 99.50th=[ 232], 99.90th=[ 334], 99.95th=[ 334], 00:24:21.422 | 99.99th=[ 334] 00:24:21.422 bw ( KiB/s): min= 2560, max=20008, per=1.22%, avg=11554.45, stdev=4862.16, samples=20 00:24:21.422 iops : min= 20, max= 156, avg=90.10, stdev=37.93, samples=20 00:24:21.422 lat (msec) : 4=0.18%, 10=25.77%, 20=17.42%, 50=8.77%, 100=39.33% 00:24:21.422 lat (msec) : 250=8.30%, 500=0.23% 00:24:21.422 cpu : usr=0.57%, sys=0.39%, ctx=2790, majf=0, minf=3 00:24:21.422 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.422 complete : 
0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.422 issued rwts: total=800,911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.422 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.422 job23: (groupid=0, jobs=1): err= 0: pid=71693: Mon Jul 22 17:02:22 2024 00:24:21.422 read: IOPS=77, BW=9922KiB/s (10.2MB/s)(80.0MiB/8256msec) 00:24:21.422 slat (usec): min=6, max=1527, avg=66.06, stdev=135.89 00:24:21.422 clat (usec): min=2766, max=80141, avg=16433.71, stdev=14654.80 00:24:21.422 lat (usec): min=3192, max=80179, avg=16499.78, stdev=14657.39 00:24:21.422 clat percentiles (usec): 00:24:21.422 | 1.00th=[ 4490], 5.00th=[ 5473], 10.00th=[ 6128], 20.00th=[ 7504], 00:24:21.422 | 30.00th=[ 8356], 40.00th=[ 9896], 50.00th=[12780], 60.00th=[14746], 00:24:21.422 | 70.00th=[16909], 80.00th=[19792], 90.00th=[26346], 95.00th=[49021], 00:24:21.422 | 99.00th=[79168], 99.50th=[80217], 99.90th=[80217], 99.95th=[80217], 00:24:21.422 | 99.99th=[80217] 00:24:21.422 write: IOPS=91, BW=11.4MiB/s (11.9MB/s)(99.2MiB/8715msec); 0 zone resets 00:24:21.422 slat (usec): min=36, max=6541, avg=162.19, stdev=335.34 00:24:21.422 clat (msec): min=48, max=261, avg=87.17, stdev=35.23 00:24:21.422 lat (msec): min=48, max=261, avg=87.33, stdev=35.23 00:24:21.422 clat percentiles (msec): 00:24:21.422 | 1.00th=[ 50], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 59], 00:24:21.422 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 80], 60.00th=[ 89], 00:24:21.422 | 70.00th=[ 94], 80.00th=[ 104], 90.00th=[ 136], 95.00th=[ 157], 00:24:21.422 | 99.00th=[ 213], 99.50th=[ 255], 99.90th=[ 262], 99.95th=[ 262], 00:24:21.422 | 99.99th=[ 262] 00:24:21.422 bw ( KiB/s): min= 768, max=16896, per=1.06%, avg=10067.15, stdev=3954.86, samples=20 00:24:21.422 iops : min= 6, max= 132, avg=78.45, stdev=30.98, samples=20 00:24:21.422 lat (msec) : 4=0.21%, 10=17.78%, 20=18.41%, 50=7.74%, 100=43.24% 00:24:21.422 lat (msec) : 250=12.34%, 500=0.28% 00:24:21.422 cpu : usr=0.55%, sys=0.29%, ctx=2388, majf=0, 
minf=5 00:24:21.422 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.422 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.422 issued rwts: total=640,794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.422 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.422 job24: (groupid=0, jobs=1): err= 0: pid=71694: Mon Jul 22 17:02:22 2024 00:24:21.422 read: IOPS=91, BW=11.4MiB/s (12.0MB/s)(100MiB/8769msec) 00:24:21.422 slat (usec): min=6, max=2699, avg=68.46, stdev=157.21 00:24:21.422 clat (msec): min=3, max=133, avg=17.71, stdev=19.08 00:24:21.422 lat (msec): min=3, max=133, avg=17.78, stdev=19.08 00:24:21.422 clat percentiles (msec): 00:24:21.422 | 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 8], 00:24:21.422 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 15], 00:24:21.422 | 70.00th=[ 17], 80.00th=[ 21], 90.00th=[ 33], 95.00th=[ 49], 00:24:21.422 | 99.00th=[ 113], 99.50th=[ 121], 99.90th=[ 134], 99.95th=[ 134], 00:24:21.422 | 99.99th=[ 134] 00:24:21.422 write: IOPS=98, BW=12.3MiB/s (12.9MB/s)(102MiB/8252msec); 0 zone resets 00:24:21.422 slat (usec): min=31, max=1501, avg=119.25, stdev=145.72 00:24:21.422 clat (msec): min=43, max=271, avg=80.26, stdev=33.19 00:24:21.422 lat (msec): min=43, max=271, avg=80.38, stdev=33.19 00:24:21.422 clat percentiles (msec): 00:24:21.422 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 53], 20.00th=[ 55], 00:24:21.422 | 30.00th=[ 60], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 80], 00:24:21.422 | 70.00th=[ 88], 80.00th=[ 99], 90.00th=[ 114], 95.00th=[ 136], 00:24:21.422 | 99.00th=[ 230], 99.50th=[ 249], 99.90th=[ 271], 99.95th=[ 271], 00:24:21.422 | 99.99th=[ 271] 00:24:21.422 bw ( KiB/s): min= 2554, max=18176, per=1.09%, avg=10324.15, stdev=5099.09, samples=20 00:24:21.422 iops : min= 19, max= 142, avg=80.40, stdev=39.94, samples=20 00:24:21.422 lat (msec) : 4=0.12%, 10=17.09%, 
20=22.23%, 50=10.96%, 100=39.57% 00:24:21.422 lat (msec) : 250=9.78%, 500=0.25% 00:24:21.422 cpu : usr=0.53%, sys=0.38%, ctx=2594, majf=0, minf=3 00:24:21.422 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.422 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.422 issued rwts: total=800,815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.422 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.422 job25: (groupid=0, jobs=1): err= 0: pid=71695: Mon Jul 22 17:02:22 2024 00:24:21.422 read: IOPS=86, BW=10.8MiB/s (11.3MB/s)(100MiB/9291msec) 00:24:21.422 slat (usec): min=7, max=5175, avg=64.98, stdev=210.87 00:24:21.422 clat (usec): min=395, max=240172, avg=12525.11, stdev=22261.92 00:24:21.422 lat (msec): min=3, max=240, avg=12.59, stdev=22.27 00:24:21.422 clat percentiles (msec): 00:24:21.422 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:24:21.422 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 11], 00:24:21.422 | 70.00th=[ 12], 80.00th=[ 14], 90.00th=[ 17], 95.00th=[ 22], 00:24:21.422 | 99.00th=[ 113], 99.50th=[ 236], 99.90th=[ 241], 99.95th=[ 241], 00:24:21.422 | 99.99th=[ 241] 00:24:21.422 write: IOPS=104, BW=13.1MiB/s (13.7MB/s)(115MiB/8800msec); 0 zone resets 00:24:21.422 slat (usec): min=32, max=5159, avg=132.46, stdev=257.13 00:24:21.422 clat (usec): min=609, max=209924, avg=76059.37, stdev=35358.90 00:24:21.422 lat (usec): min=671, max=209976, avg=76191.83, stdev=35383.21 00:24:21.422 clat percentiles (msec): 00:24:21.423 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 50], 20.00th=[ 54], 00:24:21.423 | 30.00th=[ 58], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 77], 00:24:21.423 | 70.00th=[ 85], 80.00th=[ 100], 90.00th=[ 123], 95.00th=[ 148], 00:24:21.423 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 211], 99.95th=[ 211], 00:24:21.423 | 99.99th=[ 211] 00:24:21.423 bw ( KiB/s): min= 4096, max=31294, 
per=1.23%, avg=11676.05, stdev=5985.91, samples=20 00:24:21.423 iops : min= 32, max= 244, avg=91.15, stdev=46.73, samples=20 00:24:21.423 lat (usec) : 500=0.06%, 750=0.06%, 1000=0.06% 00:24:21.423 lat (msec) : 2=0.12%, 4=1.45%, 10=27.21%, 20=17.67%, 50=5.35% 00:24:21.423 lat (msec) : 100=36.86%, 250=11.16% 00:24:21.423 cpu : usr=0.66%, sys=0.33%, ctx=2804, majf=0, minf=1 00:24:21.423 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.423 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.423 issued rwts: total=800,920,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.423 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.423 job26: (groupid=0, jobs=1): err= 0: pid=71696: Mon Jul 22 17:02:22 2024 00:24:21.423 read: IOPS=89, BW=11.2MiB/s (11.7MB/s)(100MiB/8939msec) 00:24:21.423 slat (usec): min=6, max=2412, avg=62.96, stdev=156.29 00:24:21.423 clat (msec): min=4, max=247, avg=18.71, stdev=20.87 00:24:21.423 lat (msec): min=5, max=247, avg=18.77, stdev=20.86 00:24:21.423 clat percentiles (msec): 00:24:21.423 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 10], 00:24:21.423 | 30.00th=[ 11], 40.00th=[ 13], 50.00th=[ 15], 60.00th=[ 16], 00:24:21.423 | 70.00th=[ 20], 80.00th=[ 22], 90.00th=[ 28], 95.00th=[ 38], 00:24:21.423 | 99.00th=[ 111], 99.50th=[ 171], 99.90th=[ 247], 99.95th=[ 247], 00:24:21.423 | 99.99th=[ 247] 00:24:21.423 write: IOPS=104, BW=13.0MiB/s (13.7MB/s)(106MiB/8150msec); 0 zone resets 00:24:21.423 slat (usec): min=37, max=2906, avg=133.36, stdev=215.89 00:24:21.423 clat (msec): min=37, max=421, avg=76.04, stdev=42.79 00:24:21.423 lat (msec): min=37, max=421, avg=76.17, stdev=42.80 00:24:21.423 clat percentiles (msec): 00:24:21.423 | 1.00th=[ 47], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 54], 00:24:21.423 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 65], 60.00th=[ 69], 00:24:21.423 | 70.00th=[ 74], 
80.00th=[ 85], 90.00th=[ 103], 95.00th=[ 148], 00:24:21.423 | 99.00th=[ 279], 99.50th=[ 363], 99.90th=[ 422], 99.95th=[ 422], 00:24:21.423 | 99.99th=[ 422] 00:24:21.423 bw ( KiB/s): min= 1788, max=19456, per=1.26%, avg=11982.94, stdev=5447.12, samples=18 00:24:21.423 iops : min= 13, max= 152, avg=93.39, stdev=42.61, samples=18 00:24:21.423 lat (msec) : 10=10.61%, 20=24.91%, 50=16.67%, 100=41.27%, 250=5.70% 00:24:21.423 lat (msec) : 500=0.85% 00:24:21.423 cpu : usr=0.54%, sys=0.41%, ctx=2714, majf=0, minf=3 00:24:21.423 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.423 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.423 issued rwts: total=800,850,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.423 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.423 job27: (groupid=0, jobs=1): err= 0: pid=71698: Mon Jul 22 17:02:22 2024 00:24:21.423 read: IOPS=76, BW=9762KiB/s (9996kB/s)(80.0MiB/8392msec) 00:24:21.423 slat (usec): min=6, max=1560, avg=61.61, stdev=127.87 00:24:21.423 clat (usec): min=3724, max=72926, avg=14424.23, stdev=12799.33 00:24:21.423 lat (usec): min=3744, max=72996, avg=14485.84, stdev=12800.42 00:24:21.423 clat percentiles (usec): 00:24:21.423 | 1.00th=[ 4228], 5.00th=[ 5014], 10.00th=[ 5538], 20.00th=[ 7242], 00:24:21.423 | 30.00th=[ 8029], 40.00th=[ 8848], 50.00th=[ 9765], 60.00th=[11600], 00:24:21.423 | 70.00th=[13304], 80.00th=[17957], 90.00th=[26084], 95.00th=[43779], 00:24:21.423 | 99.00th=[67634], 99.50th=[71828], 99.90th=[72877], 99.95th=[72877], 00:24:21.423 | 99.99th=[72877] 00:24:21.423 write: IOPS=90, BW=11.3MiB/s (11.9MB/s)(100MiB/8845msec); 0 zone resets 00:24:21.423 slat (usec): min=38, max=7463, avg=172.02, stdev=376.37 00:24:21.423 clat (msec): min=48, max=276, avg=87.79, stdev=36.45 00:24:21.423 lat (msec): min=48, max=276, avg=87.96, stdev=36.45 00:24:21.423 clat 
percentiles (msec): 00:24:21.423 | 1.00th=[ 50], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 60], 00:24:21.423 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 79], 60.00th=[ 87], 00:24:21.423 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 133], 95.00th=[ 171], 00:24:21.423 | 99.00th=[ 224], 99.50th=[ 230], 99.90th=[ 275], 99.95th=[ 275], 00:24:21.423 | 99.99th=[ 275] 00:24:21.423 bw ( KiB/s): min= 766, max=17920, per=1.05%, avg=9955.11, stdev=4271.77, samples=19 00:24:21.423 iops : min= 5, max= 140, avg=77.58, stdev=33.51, samples=19 00:24:21.423 lat (msec) : 4=0.21%, 10=22.78%, 20=14.24%, 50=6.04%, 100=42.85% 00:24:21.423 lat (msec) : 250=13.75%, 500=0.14% 00:24:21.423 cpu : usr=0.51%, sys=0.33%, ctx=2457, majf=0, minf=5 00:24:21.423 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.423 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.423 issued rwts: total=640,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.423 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.423 job28: (groupid=0, jobs=1): err= 0: pid=71699: Mon Jul 22 17:02:22 2024 00:24:21.423 read: IOPS=89, BW=11.2MiB/s (11.8MB/s)(100MiB/8910msec) 00:24:21.423 slat (usec): min=7, max=1565, avg=60.25, stdev=118.95 00:24:21.423 clat (msec): min=3, max=161, avg=18.71, stdev=18.42 00:24:21.423 lat (msec): min=3, max=161, avg=18.77, stdev=18.42 00:24:21.423 clat percentiles (msec): 00:24:21.423 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 10], 00:24:21.423 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 16], 00:24:21.423 | 70.00th=[ 20], 80.00th=[ 22], 90.00th=[ 30], 95.00th=[ 54], 00:24:21.423 | 99.00th=[ 75], 99.50th=[ 150], 99.90th=[ 161], 99.95th=[ 161], 00:24:21.423 | 99.99th=[ 161] 00:24:21.423 write: IOPS=100, BW=12.5MiB/s (13.1MB/s)(102MiB/8146msec); 0 zone resets 00:24:21.423 slat (usec): min=32, max=9086, avg=147.37, stdev=402.95 
00:24:21.423 clat (msec): min=31, max=302, avg=78.98, stdev=37.44 00:24:21.423 lat (msec): min=32, max=302, avg=79.12, stdev=37.43 00:24:21.423 clat percentiles (msec): 00:24:21.423 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 55], 00:24:21.423 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 68], 60.00th=[ 74], 00:24:21.423 | 70.00th=[ 83], 80.00th=[ 94], 90.00th=[ 113], 95.00th=[ 163], 00:24:21.423 | 99.00th=[ 236], 99.50th=[ 279], 99.90th=[ 305], 99.95th=[ 305], 00:24:21.423 | 99.99th=[ 305] 00:24:21.423 bw ( KiB/s): min= 2308, max=18944, per=1.09%, avg=10353.75, stdev=5427.46, samples=20 00:24:21.423 iops : min= 18, max= 148, avg=80.85, stdev=42.36, samples=20 00:24:21.423 lat (msec) : 4=0.12%, 10=11.26%, 20=25.06%, 50=14.36%, 100=40.97% 00:24:21.423 lat (msec) : 250=7.80%, 500=0.43% 00:24:21.423 cpu : usr=0.51%, sys=0.45%, ctx=2646, majf=0, minf=1 00:24:21.423 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.423 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.423 issued rwts: total=800,816,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.423 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.423 job29: (groupid=0, jobs=1): err= 0: pid=71700: Mon Jul 22 17:02:22 2024 00:24:21.423 read: IOPS=80, BW=10.0MiB/s (10.5MB/s)(80.0MiB/7975msec) 00:24:21.423 slat (usec): min=6, max=1650, avg=52.59, stdev=126.50 00:24:21.423 clat (usec): min=3510, max=54420, avg=9910.47, stdev=7183.89 00:24:21.423 lat (usec): min=3551, max=54436, avg=9963.06, stdev=7181.10 00:24:21.423 clat percentiles (usec): 00:24:21.423 | 1.00th=[ 4178], 5.00th=[ 4621], 10.00th=[ 4817], 20.00th=[ 5604], 00:24:21.423 | 30.00th=[ 6521], 40.00th=[ 7308], 50.00th=[ 7963], 60.00th=[ 8586], 00:24:21.423 | 70.00th=[ 9765], 80.00th=[12256], 90.00th=[16450], 95.00th=[20317], 00:24:21.423 | 99.00th=[50070], 99.50th=[50594], 99.90th=[54264], 
99.95th=[54264], 00:24:21.423 | 99.99th=[54264] 00:24:21.423 write: IOPS=86, BW=10.9MiB/s (11.4MB/s)(100MiB/9209msec); 0 zone resets 00:24:21.423 slat (usec): min=30, max=2535, avg=142.81, stdev=225.72 00:24:21.423 clat (msec): min=48, max=281, avg=91.57, stdev=40.81 00:24:21.423 lat (msec): min=48, max=281, avg=91.72, stdev=40.81 00:24:21.423 clat percentiles (msec): 00:24:21.423 | 1.00th=[ 50], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 59], 00:24:21.423 | 30.00th=[ 66], 40.00th=[ 73], 50.00th=[ 82], 60.00th=[ 90], 00:24:21.423 | 70.00th=[ 99], 80.00th=[ 114], 90.00th=[ 150], 95.00th=[ 180], 00:24:21.423 | 99.00th=[ 228], 99.50th=[ 243], 99.90th=[ 284], 99.95th=[ 284], 00:24:21.423 | 99.99th=[ 284] 00:24:21.423 bw ( KiB/s): min= 3328, max=15872, per=1.07%, avg=10144.00, stdev=2965.31, samples=20 00:24:21.423 iops : min= 26, max= 124, avg=79.00, stdev=23.16, samples=20 00:24:21.423 lat (msec) : 4=0.21%, 10=31.53%, 20=10.28%, 50=4.79%, 100=37.15% 00:24:21.423 lat (msec) : 250=15.76%, 500=0.28% 00:24:21.423 cpu : usr=0.52%, sys=0.29%, ctx=2397, majf=0, minf=1 00:24:21.423 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.423 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.423 issued rwts: total=640,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.423 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.423 job30: (groupid=0, jobs=1): err= 0: pid=71707: Mon Jul 22 17:02:22 2024 00:24:21.423 read: IOPS=57, BW=7331KiB/s (7507kB/s)(60.0MiB/8381msec) 00:24:21.423 slat (usec): min=7, max=1624, avg=78.95, stdev=170.71 00:24:21.423 clat (usec): min=10595, max=61635, avg=23091.63, stdev=8074.72 00:24:21.423 lat (usec): min=12219, max=61644, avg=23170.59, stdev=8071.69 00:24:21.423 clat percentiles (usec): 00:24:21.423 | 1.00th=[13304], 5.00th=[13960], 10.00th=[14484], 20.00th=[16188], 00:24:21.423 | 30.00th=[17695], 
40.00th=[19530], 50.00th=[22152], 60.00th=[23462], 00:24:21.423 | 70.00th=[25822], 80.00th=[28181], 90.00th=[33162], 95.00th=[39060], 00:24:21.423 | 99.00th=[52167], 99.50th=[54264], 99.90th=[61604], 99.95th=[61604], 00:24:21.423 | 99.99th=[61604] 00:24:21.423 write: IOPS=73, BW=9451KiB/s (9678kB/s)(79.9MiB/8654msec); 0 zone resets 00:24:21.423 slat (usec): min=40, max=2178, avg=137.24, stdev=192.47 00:24:21.423 clat (msec): min=46, max=448, avg=107.51, stdev=52.09 00:24:21.424 lat (msec): min=46, max=448, avg=107.64, stdev=52.09 00:24:21.424 clat percentiles (msec): 00:24:21.424 | 1.00th=[ 54], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 75], 00:24:21.424 | 30.00th=[ 81], 40.00th=[ 86], 50.00th=[ 91], 60.00th=[ 96], 00:24:21.424 | 70.00th=[ 104], 80.00th=[ 124], 90.00th=[ 171], 95.00th=[ 224], 00:24:21.424 | 99.00th=[ 338], 99.50th=[ 351], 99.90th=[ 447], 99.95th=[ 447], 00:24:21.424 | 99.99th=[ 447] 00:24:21.424 bw ( KiB/s): min= 1536, max=13568, per=0.89%, avg=8499.79, stdev=3954.00, samples=19 00:24:21.424 iops : min= 12, max= 106, avg=66.26, stdev=30.81, samples=19 00:24:21.424 lat (msec) : 20=18.32%, 50=24.31%, 100=38.70%, 250=16.62%, 500=2.06% 00:24:21.424 cpu : usr=0.39%, sys=0.29%, ctx=1900, majf=0, minf=3 00:24:21.424 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.424 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.424 issued rwts: total=480,639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.424 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.424 job31: (groupid=0, jobs=1): err= 0: pid=71708: Mon Jul 22 17:02:22 2024 00:24:21.424 read: IOPS=66, BW=8501KiB/s (8705kB/s)(60.0MiB/7227msec) 00:24:21.424 slat (usec): min=6, max=1147, avg=60.57, stdev=110.13 00:24:21.424 clat (msec): min=3, max=439, avg=28.50, stdev=58.45 00:24:21.424 lat (msec): min=4, max=439, avg=28.56, stdev=58.45 00:24:21.424 
clat percentiles (msec): 00:24:21.424 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 9], 00:24:21.424 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:24:21.424 | 70.00th=[ 17], 80.00th=[ 21], 90.00th=[ 34], 95.00th=[ 176], 00:24:21.424 | 99.00th=[ 363], 99.50th=[ 368], 99.90th=[ 439], 99.95th=[ 439], 00:24:21.424 | 99.99th=[ 439] 00:24:21.424 write: IOPS=58, BW=7502KiB/s (7682kB/s)(61.1MiB/8343msec); 0 zone resets 00:24:21.424 slat (usec): min=36, max=1799, avg=149.45, stdev=194.50 00:24:21.424 clat (msec): min=71, max=421, avg=135.49, stdev=50.81 00:24:21.424 lat (msec): min=71, max=421, avg=135.64, stdev=50.83 00:24:21.424 clat percentiles (msec): 00:24:21.424 | 1.00th=[ 73], 5.00th=[ 77], 10.00th=[ 81], 20.00th=[ 88], 00:24:21.424 | 30.00th=[ 99], 40.00th=[ 112], 50.00th=[ 126], 60.00th=[ 144], 00:24:21.424 | 70.00th=[ 161], 80.00th=[ 176], 90.00th=[ 203], 95.00th=[ 224], 00:24:21.424 | 99.00th=[ 300], 99.50th=[ 326], 99.90th=[ 422], 99.95th=[ 422], 00:24:21.424 | 99.99th=[ 422] 00:24:21.424 bw ( KiB/s): min= 510, max=11008, per=0.65%, avg=6165.50, stdev=2983.39, samples=20 00:24:21.424 iops : min= 3, max= 86, avg=47.90, stdev=23.29, samples=20 00:24:21.424 lat (msec) : 4=0.10%, 10=13.52%, 20=25.08%, 50=6.30%, 100=17.75% 00:24:21.424 lat (msec) : 250=35.09%, 500=2.17% 00:24:21.424 cpu : usr=0.39%, sys=0.15%, ctx=1704, majf=0, minf=7 00:24:21.424 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=94.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.424 complete : 0=0.0%, 4=99.2%, 8=0.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.424 issued rwts: total=480,489,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.424 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.424 job32: (groupid=0, jobs=1): err= 0: pid=71709: Mon Jul 22 17:02:22 2024 00:24:21.424 read: IOPS=65, BW=8380KiB/s (8581kB/s)(60.0MiB/7332msec) 00:24:21.424 slat (usec): min=6, max=618, avg=49.18, 
stdev=70.13 00:24:21.424 clat (usec): min=5128, max=55648, avg=13808.88, stdev=8210.72 00:24:21.424 lat (usec): min=5236, max=55661, avg=13858.07, stdev=8206.41 00:24:21.424 clat percentiles (usec): 00:24:21.424 | 1.00th=[ 6063], 5.00th=[ 6587], 10.00th=[ 6980], 20.00th=[ 8029], 00:24:21.424 | 30.00th=[ 9896], 40.00th=[10814], 50.00th=[11600], 60.00th=[12387], 00:24:21.424 | 70.00th=[13960], 80.00th=[16319], 90.00th=[23725], 95.00th=[30278], 00:24:21.424 | 99.00th=[50070], 99.50th=[54264], 99.90th=[55837], 99.95th=[55837], 00:24:21.424 | 99.99th=[55837] 00:24:21.424 write: IOPS=62, BW=7948KiB/s (8139kB/s)(71.6MiB/9228msec); 0 zone resets 00:24:21.424 slat (usec): min=39, max=2633, avg=155.97, stdev=256.19 00:24:21.424 clat (msec): min=63, max=385, avg=127.60, stdev=59.45 00:24:21.424 lat (msec): min=63, max=386, avg=127.75, stdev=59.48 00:24:21.424 clat percentiles (msec): 00:24:21.424 | 1.00th=[ 69], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 80], 00:24:21.424 | 30.00th=[ 87], 40.00th=[ 93], 50.00th=[ 106], 60.00th=[ 134], 00:24:21.424 | 70.00th=[ 153], 80.00th=[ 167], 90.00th=[ 197], 95.00th=[ 245], 00:24:21.424 | 99.00th=[ 355], 99.50th=[ 380], 99.90th=[ 388], 99.95th=[ 388], 00:24:21.424 | 99.99th=[ 388] 00:24:21.424 bw ( KiB/s): min= 2048, max=12263, per=0.76%, avg=7238.30, stdev=3400.59, samples=20 00:24:21.424 iops : min= 16, max= 95, avg=56.35, stdev=26.52, samples=20 00:24:21.424 lat (msec) : 10=13.96%, 20=25.07%, 50=6.08%, 100=25.83%, 250=26.69% 00:24:21.424 lat (msec) : 500=2.37% 00:24:21.424 cpu : usr=0.36%, sys=0.24%, ctx=1813, majf=0, minf=3 00:24:21.424 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.424 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.424 issued rwts: total=480,573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.424 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.424 job33: 
(groupid=0, jobs=1): err= 0: pid=71710: Mon Jul 22 17:02:22 2024 00:24:21.424 read: IOPS=58, BW=7442KiB/s (7620kB/s)(60.0MiB/8256msec) 00:24:21.424 slat (usec): min=6, max=3040, avg=94.62, stdev=247.42 00:24:21.424 clat (usec): min=10368, max=68530, avg=26218.54, stdev=10866.22 00:24:21.424 lat (usec): min=10390, max=68817, avg=26313.16, stdev=10914.49 00:24:21.424 clat percentiles (usec): 00:24:21.424 | 1.00th=[11469], 5.00th=[12256], 10.00th=[13173], 20.00th=[17433], 00:24:21.424 | 30.00th=[21365], 40.00th=[22676], 50.00th=[23200], 60.00th=[26608], 00:24:21.424 | 70.00th=[28967], 80.00th=[33817], 90.00th=[40633], 95.00th=[44827], 00:24:21.424 | 99.00th=[60556], 99.50th=[64750], 99.90th=[68682], 99.95th=[68682], 00:24:21.424 | 99.99th=[68682] 00:24:21.424 write: IOPS=72, BW=9340KiB/s (9565kB/s)(77.2MiB/8469msec); 0 zone resets 00:24:21.424 slat (usec): min=32, max=7974, avg=134.67, stdev=346.06 00:24:21.424 clat (msec): min=55, max=608, avg=108.66, stdev=62.26 00:24:21.424 lat (msec): min=55, max=608, avg=108.79, stdev=62.27 00:24:21.424 clat percentiles (msec): 00:24:21.424 | 1.00th=[ 62], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 74], 00:24:21.424 | 30.00th=[ 80], 40.00th=[ 85], 50.00th=[ 92], 60.00th=[ 97], 00:24:21.424 | 70.00th=[ 107], 80.00th=[ 126], 90.00th=[ 159], 95.00th=[ 205], 00:24:21.424 | 99.00th=[ 430], 99.50th=[ 558], 99.90th=[ 609], 99.95th=[ 609], 00:24:21.424 | 99.99th=[ 609] 00:24:21.424 bw ( KiB/s): min= 1024, max=13568, per=0.91%, avg=8688.50, stdev=3854.17, samples=18 00:24:21.424 iops : min= 8, max= 106, avg=67.78, stdev=30.06, samples=18 00:24:21.424 lat (msec) : 20=10.47%, 50=31.42%, 100=37.70%, 250=18.76%, 500=1.18% 00:24:21.424 lat (msec) : 750=0.46% 00:24:21.424 cpu : usr=0.43%, sys=0.23%, ctx=1856, majf=0, minf=3 00:24:21.424 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.424 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.424 issued rwts: total=480,618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.424 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.424 job34: (groupid=0, jobs=1): err= 0: pid=71711: Mon Jul 22 17:02:22 2024 00:24:21.424 read: IOPS=57, BW=7414KiB/s (7592kB/s)(61.5MiB/8494msec) 00:24:21.424 slat (usec): min=6, max=3491, avg=78.42, stdev=253.73 00:24:21.424 clat (usec): min=4853, max=53056, avg=14138.33, stdev=7060.76 00:24:21.424 lat (usec): min=4869, max=53069, avg=14216.75, stdev=7051.32 00:24:21.424 clat percentiles (usec): 00:24:21.424 | 1.00th=[ 6521], 5.00th=[ 7308], 10.00th=[ 7701], 20.00th=[ 9241], 00:24:21.424 | 30.00th=[10159], 40.00th=[11207], 50.00th=[11863], 60.00th=[13042], 00:24:21.424 | 70.00th=[15401], 80.00th=[17433], 90.00th=[23462], 95.00th=[28443], 00:24:21.424 | 99.00th=[40109], 99.50th=[46400], 99.90th=[53216], 99.95th=[53216], 00:24:21.424 | 99.99th=[53216] 00:24:21.424 write: IOPS=70, BW=8973KiB/s (9188kB/s)(80.0MiB/9130msec); 0 zone resets 00:24:21.424 slat (usec): min=39, max=8424, avg=160.95, stdev=410.03 00:24:21.424 clat (msec): min=9, max=350, avg=113.33, stdev=51.07 00:24:21.424 lat (msec): min=9, max=350, avg=113.49, stdev=51.08 00:24:21.424 clat percentiles (msec): 00:24:21.424 | 1.00th=[ 19], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 75], 00:24:21.424 | 30.00th=[ 82], 40.00th=[ 89], 50.00th=[ 94], 60.00th=[ 104], 00:24:21.424 | 70.00th=[ 118], 80.00th=[ 157], 90.00th=[ 194], 95.00th=[ 218], 00:24:21.424 | 99.00th=[ 264], 99.50th=[ 284], 99.90th=[ 351], 99.95th=[ 351], 00:24:21.424 | 99.99th=[ 351] 00:24:21.424 bw ( KiB/s): min= 1024, max=14592, per=0.86%, avg=8192.25, stdev=3749.51, samples=20 00:24:21.424 iops : min= 8, max= 114, avg=63.95, stdev=29.29, samples=20 00:24:21.424 lat (msec) : 10=12.19%, 20=25.09%, 50=7.51%, 100=30.48%, 250=23.50% 00:24:21.424 lat (msec) : 500=1.24% 00:24:21.424 cpu : usr=0.39%, sys=0.31%, ctx=1800, majf=0, minf=7 00:24:21.424 IO depths : 1=0.7%, 2=1.4%, 
4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.424 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.424 issued rwts: total=492,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.424 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.424 job35: (groupid=0, jobs=1): err= 0: pid=71712: Mon Jul 22 17:02:22 2024 00:24:21.424 read: IOPS=56, BW=7185KiB/s (7358kB/s)(60.0MiB/8551msec) 00:24:21.424 slat (usec): min=8, max=3467, avg=80.72, stdev=231.67 00:24:21.424 clat (usec): min=4612, max=97806, avg=19999.51, stdev=12594.27 00:24:21.424 lat (usec): min=4722, max=97831, avg=20080.23, stdev=12601.70 00:24:21.424 clat percentiles (usec): 00:24:21.424 | 1.00th=[ 8029], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[12387], 00:24:21.424 | 30.00th=[13829], 40.00th=[14877], 50.00th=[16319], 60.00th=[18220], 00:24:21.424 | 70.00th=[20055], 80.00th=[22676], 90.00th=[30802], 95.00th=[47449], 00:24:21.424 | 99.00th=[72877], 99.50th=[74974], 99.90th=[98042], 99.95th=[98042], 00:24:21.424 | 99.99th=[98042] 00:24:21.424 write: IOPS=71, BW=9184KiB/s (9405kB/s)(79.4MiB/8850msec); 0 zone resets 00:24:21.424 slat (usec): min=41, max=1857, avg=135.50, stdev=209.23 00:24:21.425 clat (msec): min=4, max=361, avg=110.72, stdev=51.58 00:24:21.425 lat (msec): min=4, max=362, avg=110.86, stdev=51.59 00:24:21.425 clat percentiles (msec): 00:24:21.425 | 1.00th=[ 21], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 77], 00:24:21.425 | 30.00th=[ 84], 40.00th=[ 90], 50.00th=[ 95], 60.00th=[ 102], 00:24:21.425 | 70.00th=[ 109], 80.00th=[ 131], 90.00th=[ 182], 95.00th=[ 220], 00:24:21.425 | 99.00th=[ 313], 99.50th=[ 351], 99.90th=[ 363], 99.95th=[ 363], 00:24:21.425 | 99.99th=[ 363] 00:24:21.425 bw ( KiB/s): min= 1792, max=15616, per=0.89%, avg=8461.63, stdev=3773.09, samples=19 00:24:21.425 iops : min= 14, max= 122, avg=66.05, stdev=29.46, samples=19 00:24:21.425 lat (msec) : 
10=2.69%, 20=27.62%, 50=12.29%, 100=33.36%, 250=22.87% 00:24:21.425 lat (msec) : 500=1.17% 00:24:21.425 cpu : usr=0.44%, sys=0.26%, ctx=1732, majf=0, minf=3 00:24:21.425 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.425 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.425 issued rwts: total=480,635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.425 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.425 job36: (groupid=0, jobs=1): err= 0: pid=71713: Mon Jul 22 17:02:22 2024 00:24:21.425 read: IOPS=65, BW=8399KiB/s (8601kB/s)(56.4MiB/6873msec) 00:24:21.425 slat (usec): min=7, max=1971, avg=67.19, stdev=160.24 00:24:21.425 clat (msec): min=5, max=211, avg=26.86, stdev=37.21 00:24:21.425 lat (msec): min=5, max=211, avg=26.93, stdev=37.21 00:24:21.425 clat percentiles (msec): 00:24:21.425 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:24:21.425 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 18], 00:24:21.425 | 70.00th=[ 22], 80.00th=[ 28], 90.00th=[ 51], 95.00th=[ 140], 00:24:21.425 | 99.00th=[ 186], 99.50th=[ 207], 99.90th=[ 211], 99.95th=[ 211], 00:24:21.425 | 99.99th=[ 211] 00:24:21.425 write: IOPS=56, BW=7242KiB/s (7416kB/s)(60.0MiB/8484msec); 0 zone resets 00:24:21.425 slat (usec): min=35, max=1718, avg=138.57, stdev=170.05 00:24:21.425 clat (msec): min=68, max=398, avg=140.69, stdev=49.86 00:24:21.425 lat (msec): min=69, max=398, avg=140.83, stdev=49.86 00:24:21.425 clat percentiles (msec): 00:24:21.425 | 1.00th=[ 73], 5.00th=[ 80], 10.00th=[ 84], 20.00th=[ 96], 00:24:21.425 | 30.00th=[ 103], 40.00th=[ 117], 50.00th=[ 138], 60.00th=[ 150], 00:24:21.425 | 70.00th=[ 167], 80.00th=[ 180], 90.00th=[ 207], 95.00th=[ 224], 00:24:21.425 | 99.00th=[ 288], 99.50th=[ 334], 99.90th=[ 401], 99.95th=[ 401], 00:24:21.425 | 99.99th=[ 401] 00:24:21.425 bw ( KiB/s): min= 1792, max=11008, 
per=0.65%, avg=6181.74, stdev=2843.85, samples=19 00:24:21.425 iops : min= 14, max= 86, avg=48.05, stdev=22.35, samples=19 00:24:21.425 lat (msec) : 10=11.39%, 20=20.41%, 50=11.71%, 100=16.11%, 250=39.42% 00:24:21.425 lat (msec) : 500=0.97% 00:24:21.425 cpu : usr=0.39%, sys=0.18%, ctx=1524, majf=0, minf=7 00:24:21.425 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.425 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.425 issued rwts: total=451,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.425 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.425 job37: (groupid=0, jobs=1): err= 0: pid=71714: Mon Jul 22 17:02:22 2024 00:24:21.425 read: IOPS=56, BW=7261KiB/s (7435kB/s)(60.0MiB/8462msec) 00:24:21.425 slat (usec): min=8, max=2134, avg=70.71, stdev=174.02 00:24:21.425 clat (usec): min=7355, max=70554, avg=20883.52, stdev=10165.61 00:24:21.425 lat (usec): min=7434, max=70579, avg=20954.23, stdev=10162.67 00:24:21.425 clat percentiles (usec): 00:24:21.425 | 1.00th=[10028], 5.00th=[11469], 10.00th=[12518], 20.00th=[14091], 00:24:21.425 | 30.00th=[15139], 40.00th=[16188], 50.00th=[17695], 60.00th=[19792], 00:24:21.425 | 70.00th=[21890], 80.00th=[25560], 90.00th=[32113], 95.00th=[42730], 00:24:21.425 | 99.00th=[63177], 99.50th=[63701], 99.90th=[70779], 99.95th=[70779], 00:24:21.425 | 99.99th=[70779] 00:24:21.425 write: IOPS=72, BW=9296KiB/s (9520kB/s)(80.0MiB/8812msec); 0 zone resets 00:24:21.425 slat (usec): min=41, max=6238, avg=155.59, stdev=334.16 00:24:21.425 clat (msec): min=19, max=380, avg=109.41, stdev=51.01 00:24:21.425 lat (msec): min=19, max=380, avg=109.57, stdev=51.02 00:24:21.425 clat percentiles (msec): 00:24:21.425 | 1.00th=[ 31], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 78], 00:24:21.425 | 30.00th=[ 81], 40.00th=[ 86], 50.00th=[ 92], 60.00th=[ 97], 00:24:21.425 | 70.00th=[ 108], 80.00th=[ 
129], 90.00th=[ 182], 95.00th=[ 222], 00:24:21.425 | 99.00th=[ 305], 99.50th=[ 338], 99.90th=[ 380], 99.95th=[ 380], 00:24:21.425 | 99.99th=[ 380] 00:24:21.425 bw ( KiB/s): min= 1024, max=13056, per=0.85%, avg=8086.95, stdev=4025.90, samples=20 00:24:21.425 iops : min= 8, max= 102, avg=63.05, stdev=31.42, samples=20 00:24:21.425 lat (msec) : 10=0.54%, 20=25.89%, 50=15.71%, 100=36.96%, 250=19.55% 00:24:21.425 lat (msec) : 500=1.34% 00:24:21.425 cpu : usr=0.43%, sys=0.27%, ctx=1881, majf=0, minf=6 00:24:21.425 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.425 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.425 issued rwts: total=480,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.425 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.425 job38: (groupid=0, jobs=1): err= 0: pid=71715: Mon Jul 22 17:02:22 2024 00:24:21.425 read: IOPS=57, BW=7355KiB/s (7532kB/s)(60.0MiB/8353msec) 00:24:21.425 slat (usec): min=7, max=2698, avg=82.90, stdev=194.32 00:24:21.425 clat (usec): min=10805, max=71237, avg=23272.96, stdev=10325.61 00:24:21.425 lat (usec): min=10844, max=71252, avg=23355.86, stdev=10344.60 00:24:21.425 clat percentiles (usec): 00:24:21.425 | 1.00th=[11076], 5.00th=[12256], 10.00th=[13435], 20.00th=[14484], 00:24:21.425 | 30.00th=[16319], 40.00th=[18744], 50.00th=[21103], 60.00th=[23462], 00:24:21.425 | 70.00th=[25822], 80.00th=[28967], 90.00th=[37487], 95.00th=[43779], 00:24:21.425 | 99.00th=[60556], 99.50th=[61604], 99.90th=[70779], 99.95th=[70779], 00:24:21.425 | 99.99th=[70779] 00:24:21.425 write: IOPS=71, BW=9120KiB/s (9339kB/s)(77.1MiB/8660msec); 0 zone resets 00:24:21.425 slat (usec): min=41, max=2537, avg=145.95, stdev=199.00 00:24:21.425 clat (msec): min=31, max=553, avg=111.30, stdev=67.31 00:24:21.425 lat (msec): min=31, max=553, avg=111.44, stdev=67.30 00:24:21.425 clat percentiles 
(msec): 00:24:21.425 | 1.00th=[ 39], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 75], 00:24:21.425 | 30.00th=[ 80], 40.00th=[ 86], 50.00th=[ 92], 60.00th=[ 99], 00:24:21.425 | 70.00th=[ 107], 80.00th=[ 124], 90.00th=[ 171], 95.00th=[ 224], 00:24:21.425 | 99.00th=[ 481], 99.50th=[ 531], 99.90th=[ 558], 99.95th=[ 558], 00:24:21.425 | 99.99th=[ 558] 00:24:21.425 bw ( KiB/s): min= 1021, max=13056, per=0.91%, avg=8659.33, stdev=3773.90, samples=18 00:24:21.425 iops : min= 7, max= 102, avg=67.50, stdev=29.56, samples=18 00:24:21.425 lat (msec) : 20=19.96%, 50=23.61%, 100=35.10%, 250=19.23%, 500=1.73% 00:24:21.425 lat (msec) : 750=0.36% 00:24:21.425 cpu : usr=0.37%, sys=0.29%, ctx=1827, majf=0, minf=6 00:24:21.425 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.425 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.425 issued rwts: total=480,617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.425 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.425 job39: (groupid=0, jobs=1): err= 0: pid=71716: Mon Jul 22 17:02:22 2024 00:24:21.425 read: IOPS=57, BW=7397KiB/s (7575kB/s)(60.0MiB/8306msec) 00:24:21.425 slat (usec): min=6, max=895, avg=63.18, stdev=116.16 00:24:21.425 clat (msec): min=5, max=250, avg=25.18, stdev=28.90 00:24:21.425 lat (msec): min=5, max=250, avg=25.25, stdev=28.91 00:24:21.425 clat percentiles (msec): 00:24:21.425 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:24:21.425 | 30.00th=[ 13], 40.00th=[ 16], 50.00th=[ 19], 60.00th=[ 23], 00:24:21.425 | 70.00th=[ 26], 80.00th=[ 30], 90.00th=[ 40], 95.00th=[ 48], 00:24:21.425 | 99.00th=[ 165], 99.50th=[ 249], 99.90th=[ 251], 99.95th=[ 251], 00:24:21.425 | 99.99th=[ 251] 00:24:21.425 write: IOPS=65, BW=8333KiB/s (8533kB/s)(69.4MiB/8525msec); 0 zone resets 00:24:21.425 slat (usec): min=34, max=3889, avg=131.03, stdev=225.16 00:24:21.425 clat (msec): 
min=38, max=506, avg=121.87, stdev=61.51 00:24:21.425 lat (msec): min=39, max=506, avg=122.00, stdev=61.52 00:24:21.425 clat percentiles (msec): 00:24:21.425 | 1.00th=[ 45], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 78], 00:24:21.425 | 30.00th=[ 85], 40.00th=[ 92], 50.00th=[ 103], 60.00th=[ 114], 00:24:21.425 | 70.00th=[ 131], 80.00th=[ 159], 90.00th=[ 192], 95.00th=[ 257], 00:24:21.425 | 99.00th=[ 368], 99.50th=[ 405], 99.90th=[ 506], 99.95th=[ 506], 00:24:21.425 | 99.99th=[ 506] 00:24:21.425 bw ( KiB/s): min= 768, max=13851, per=0.82%, avg=7791.44, stdev=3558.28, samples=18 00:24:21.425 iops : min= 6, max= 108, avg=60.67, stdev=27.75, samples=18 00:24:21.425 lat (msec) : 10=8.41%, 20=16.52%, 50=20.19%, 100=25.70%, 250=26.28% 00:24:21.425 lat (msec) : 500=2.80%, 750=0.10% 00:24:21.425 cpu : usr=0.42%, sys=0.19%, ctx=1718, majf=0, minf=7 00:24:21.425 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.425 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.425 issued rwts: total=480,555,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.425 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.425 job40: (groupid=0, jobs=1): err= 0: pid=71717: Mon Jul 22 17:02:22 2024 00:24:21.425 read: IOPS=65, BW=8380KiB/s (8581kB/s)(60.0MiB/7332msec) 00:24:21.425 slat (usec): min=7, max=1004, avg=59.87, stdev=105.11 00:24:21.425 clat (msec): min=4, max=217, avg=29.92, stdev=37.40 00:24:21.425 lat (msec): min=4, max=217, avg=29.98, stdev=37.39 00:24:21.425 clat percentiles (msec): 00:24:21.425 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 13], 00:24:21.425 | 30.00th=[ 14], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 20], 00:24:21.425 | 70.00th=[ 25], 80.00th=[ 31], 90.00th=[ 57], 95.00th=[ 128], 00:24:21.425 | 99.00th=[ 192], 99.50th=[ 211], 99.90th=[ 218], 99.95th=[ 218], 00:24:21.425 | 99.99th=[ 218] 00:24:21.426 write: IOPS=59, 
BW=7617KiB/s (7800kB/s)(61.4MiB/8251msec); 0 zone resets 00:24:21.426 slat (usec): min=37, max=2138, avg=152.90, stdev=217.19 00:24:21.426 clat (msec): min=71, max=400, avg=133.40, stdev=60.26 00:24:21.426 lat (msec): min=71, max=400, avg=133.55, stdev=60.28 00:24:21.426 clat percentiles (msec): 00:24:21.426 | 1.00th=[ 72], 5.00th=[ 75], 10.00th=[ 79], 20.00th=[ 85], 00:24:21.426 | 30.00th=[ 91], 40.00th=[ 101], 50.00th=[ 110], 60.00th=[ 128], 00:24:21.426 | 70.00th=[ 148], 80.00th=[ 190], 90.00th=[ 226], 95.00th=[ 247], 00:24:21.426 | 99.00th=[ 317], 99.50th=[ 388], 99.90th=[ 401], 99.95th=[ 401], 00:24:21.426 | 99.99th=[ 401] 00:24:21.426 bw ( KiB/s): min= 1021, max=12032, per=0.72%, avg=6879.50, stdev=3427.75, samples=18 00:24:21.426 iops : min= 7, max= 94, avg=53.44, stdev=26.90, samples=18 00:24:21.426 lat (msec) : 10=5.66%, 20=24.72%, 50=13.90%, 100=21.83%, 250=31.62% 00:24:21.426 lat (msec) : 500=2.27% 00:24:21.426 cpu : usr=0.34%, sys=0.21%, ctx=1642, majf=0, minf=3 00:24:21.426 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=95.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.426 complete : 0=0.0%, 4=99.2%, 8=0.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.426 issued rwts: total=480,491,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.426 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.426 job41: (groupid=0, jobs=1): err= 0: pid=71718: Mon Jul 22 17:02:22 2024 00:24:21.426 read: IOPS=53, BW=6867KiB/s (7032kB/s)(44.8MiB/6673msec) 00:24:21.426 slat (usec): min=6, max=1646, avg=85.10, stdev=175.67 00:24:21.426 clat (msec): min=4, max=386, avg=35.19, stdev=67.09 00:24:21.426 lat (msec): min=4, max=386, avg=35.28, stdev=67.08 00:24:21.426 clat percentiles (msec): 00:24:21.426 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 11], 00:24:21.426 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 16], 60.00th=[ 18], 00:24:21.426 | 70.00th=[ 23], 80.00th=[ 28], 90.00th=[ 71], 95.00th=[ 
124], 00:24:21.426 | 99.00th=[ 388], 99.50th=[ 388], 99.90th=[ 388], 99.95th=[ 388], 00:24:21.426 | 99.99th=[ 388] 00:24:21.426 write: IOPS=56, BW=7295KiB/s (7470kB/s)(60.0MiB/8422msec); 0 zone resets 00:24:21.426 slat (usec): min=37, max=1973, avg=145.39, stdev=214.35 00:24:21.426 clat (msec): min=71, max=305, avg=139.64, stdev=58.77 00:24:21.426 lat (msec): min=71, max=305, avg=139.79, stdev=58.77 00:24:21.426 clat percentiles (msec): 00:24:21.426 | 1.00th=[ 72], 5.00th=[ 78], 10.00th=[ 81], 20.00th=[ 87], 00:24:21.426 | 30.00th=[ 94], 40.00th=[ 104], 50.00th=[ 116], 60.00th=[ 144], 00:24:21.426 | 70.00th=[ 163], 80.00th=[ 207], 90.00th=[ 230], 95.00th=[ 249], 00:24:21.426 | 99.00th=[ 292], 99.50th=[ 296], 99.90th=[ 305], 99.95th=[ 305], 00:24:21.426 | 99.99th=[ 305] 00:24:21.426 bw ( KiB/s): min= 510, max=12032, per=0.71%, avg=6775.24, stdev=3057.27, samples=17 00:24:21.426 iops : min= 3, max= 94, avg=52.76, stdev=24.02, samples=17 00:24:21.426 lat (msec) : 10=7.76%, 20=21.00%, 50=9.31%, 100=21.72%, 250=35.68% 00:24:21.426 lat (msec) : 500=4.53% 00:24:21.426 cpu : usr=0.31%, sys=0.18%, ctx=1406, majf=0, minf=11 00:24:21.426 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=95.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.426 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.426 issued rwts: total=358,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.426 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.426 job42: (groupid=0, jobs=1): err= 0: pid=71719: Mon Jul 22 17:02:22 2024 00:24:21.426 read: IOPS=58, BW=7444KiB/s (7622kB/s)(61.4MiB/8443msec) 00:24:21.426 slat (usec): min=6, max=1133, avg=70.70, stdev=128.78 00:24:21.426 clat (usec): min=5539, max=84017, avg=20281.27, stdev=10037.15 00:24:21.426 lat (usec): min=6672, max=84047, avg=20351.98, stdev=10039.99 00:24:21.426 clat percentiles (usec): 00:24:21.426 | 1.00th=[ 7373], 5.00th=[ 9372], 
10.00th=[11207], 20.00th=[13042], 00:24:21.426 | 30.00th=[14484], 40.00th=[16450], 50.00th=[17695], 60.00th=[19792], 00:24:21.426 | 70.00th=[21890], 80.00th=[26346], 90.00th=[32375], 95.00th=[38536], 00:24:21.426 | 99.00th=[64750], 99.50th=[72877], 99.90th=[84411], 99.95th=[84411], 00:24:21.426 | 99.99th=[84411] 00:24:21.426 write: IOPS=73, BW=9353KiB/s (9577kB/s)(80.0MiB/8759msec); 0 zone resets 00:24:21.426 slat (usec): min=40, max=1949, avg=126.73, stdev=166.86 00:24:21.426 clat (msec): min=4, max=297, avg=108.71, stdev=51.04 00:24:21.426 lat (msec): min=4, max=297, avg=108.83, stdev=51.06 00:24:21.426 clat percentiles (msec): 00:24:21.426 | 1.00th=[ 10], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 78], 00:24:21.426 | 30.00th=[ 82], 40.00th=[ 86], 50.00th=[ 92], 60.00th=[ 99], 00:24:21.426 | 70.00th=[ 108], 80.00th=[ 130], 90.00th=[ 201], 95.00th=[ 226], 00:24:21.426 | 99.00th=[ 279], 99.50th=[ 292], 99.90th=[ 296], 99.95th=[ 296], 00:24:21.426 | 99.99th=[ 296] 00:24:21.426 bw ( KiB/s): min= 1792, max=13796, per=0.91%, avg=8622.16, stdev=3765.90, samples=19 00:24:21.426 iops : min= 14, max= 107, avg=67.26, stdev=29.36, samples=19 00:24:21.426 lat (msec) : 10=3.09%, 20=24.49%, 50=16.71%, 100=33.51%, 250=20.78% 00:24:21.426 lat (msec) : 500=1.41% 00:24:21.426 cpu : usr=0.44%, sys=0.26%, ctx=1838, majf=0, minf=5 00:24:21.426 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.426 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.426 issued rwts: total=491,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.426 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.426 job43: (groupid=0, jobs=1): err= 0: pid=71720: Mon Jul 22 17:02:22 2024 00:24:21.426 read: IOPS=58, BW=7500KiB/s (7680kB/s)(60.0MiB/8192msec) 00:24:21.426 slat (usec): min=7, max=1615, avg=75.67, stdev=151.39 00:24:21.426 clat (usec): min=8704, max=95099, 
avg=29321.54, stdev=14840.10 00:24:21.426 lat (usec): min=8718, max=95117, avg=29397.21, stdev=14843.24 00:24:21.426 clat percentiles (usec): 00:24:21.426 | 1.00th=[10159], 5.00th=[12387], 10.00th=[14615], 20.00th=[19268], 00:24:21.426 | 30.00th=[21890], 40.00th=[23725], 50.00th=[25560], 60.00th=[27657], 00:24:21.426 | 70.00th=[31589], 80.00th=[36439], 90.00th=[46400], 95.00th=[64750], 00:24:21.426 | 99.00th=[83362], 99.50th=[90702], 99.90th=[94897], 99.95th=[94897], 00:24:21.426 | 99.99th=[94897] 00:24:21.426 write: IOPS=73, BW=9373KiB/s (9598kB/s)(75.9MiB/8289msec); 0 zone resets 00:24:21.426 slat (usec): min=39, max=4195, avg=145.11, stdev=295.76 00:24:21.426 clat (msec): min=62, max=372, avg=107.92, stdev=51.97 00:24:21.426 lat (msec): min=63, max=372, avg=108.06, stdev=51.99 00:24:21.426 clat percentiles (msec): 00:24:21.426 | 1.00th=[ 69], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 77], 00:24:21.426 | 30.00th=[ 81], 40.00th=[ 86], 50.00th=[ 91], 60.00th=[ 97], 00:24:21.426 | 70.00th=[ 107], 80.00th=[ 126], 90.00th=[ 155], 95.00th=[ 220], 00:24:21.426 | 99.00th=[ 351], 99.50th=[ 355], 99.90th=[ 372], 99.95th=[ 372], 00:24:21.426 | 99.99th=[ 372] 00:24:21.426 bw ( KiB/s): min= 1024, max=13824, per=0.90%, avg=8530.33, stdev=3839.19, samples=18 00:24:21.426 iops : min= 8, max= 108, avg=66.50, stdev=29.99, samples=18 00:24:21.426 lat (msec) : 10=0.28%, 20=9.94%, 50=30.36%, 100=38.64%, 250=18.77% 00:24:21.426 lat (msec) : 500=2.02% 00:24:21.426 cpu : usr=0.36%, sys=0.31%, ctx=1815, majf=0, minf=3 00:24:21.426 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.426 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.426 issued rwts: total=480,607,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.426 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.426 job44: (groupid=0, jobs=1): err= 0: pid=71721: Mon Jul 22 17:02:22 
2024 00:24:21.426 read: IOPS=56, BW=7293KiB/s (7468kB/s)(60.0MiB/8425msec) 00:24:21.426 slat (usec): min=7, max=1275, avg=66.08, stdev=151.36 00:24:21.426 clat (msec): min=7, max=102, avg=23.46, stdev=12.42 00:24:21.426 lat (msec): min=9, max=102, avg=23.52, stdev=12.40 00:24:21.426 clat percentiles (msec): 00:24:21.426 | 1.00th=[ 14], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 18], 00:24:21.426 | 30.00th=[ 18], 40.00th=[ 20], 50.00th=[ 21], 60.00th=[ 22], 00:24:21.426 | 70.00th=[ 24], 80.00th=[ 26], 90.00th=[ 31], 95.00th=[ 47], 00:24:21.426 | 99.00th=[ 90], 99.50th=[ 96], 99.90th=[ 103], 99.95th=[ 103], 00:24:21.426 | 99.99th=[ 103] 00:24:21.426 write: IOPS=72, BW=9311KiB/s (9534kB/s)(78.6MiB/8647msec); 0 zone resets 00:24:21.426 slat (usec): min=39, max=1362, avg=115.94, stdev=129.86 00:24:21.426 clat (msec): min=35, max=534, avg=109.19, stdev=63.80 00:24:21.426 lat (msec): min=36, max=534, avg=109.30, stdev=63.80 00:24:21.426 clat percentiles (msec): 00:24:21.426 | 1.00th=[ 46], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 75], 00:24:21.426 | 30.00th=[ 81], 40.00th=[ 86], 50.00th=[ 91], 60.00th=[ 96], 00:24:21.426 | 70.00th=[ 105], 80.00th=[ 121], 90.00th=[ 163], 95.00th=[ 226], 00:24:21.426 | 99.00th=[ 426], 99.50th=[ 481], 99.90th=[ 535], 99.95th=[ 535], 00:24:21.426 | 99.99th=[ 535] 00:24:21.426 bw ( KiB/s): min= 512, max=13568, per=0.88%, avg=8363.89, stdev=4049.29, samples=19 00:24:21.426 iops : min= 4, max= 106, avg=65.16, stdev=31.70, samples=19 00:24:21.426 lat (msec) : 10=0.09%, 20=21.28%, 50=20.83%, 100=37.69%, 250=17.94% 00:24:21.426 lat (msec) : 500=1.98%, 750=0.18% 00:24:21.426 cpu : usr=0.45%, sys=0.24%, ctx=1703, majf=0, minf=3 00:24:21.426 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.426 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.426 issued rwts: total=480,629,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:24:21.426 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.426 job45: (groupid=0, jobs=1): err= 0: pid=71722: Mon Jul 22 17:02:22 2024 00:24:21.426 read: IOPS=57, BW=7416KiB/s (7594kB/s)(61.0MiB/8423msec) 00:24:21.426 slat (usec): min=7, max=2112, avg=71.73, stdev=156.88 00:24:21.426 clat (usec): min=7583, max=74868, avg=20685.88, stdev=11638.84 00:24:21.426 lat (usec): min=7671, max=74888, avg=20757.61, stdev=11656.38 00:24:21.426 clat percentiles (usec): 00:24:21.426 | 1.00th=[ 7701], 5.00th=[ 8717], 10.00th=[ 9765], 20.00th=[11600], 00:24:21.426 | 30.00th=[13829], 40.00th=[15401], 50.00th=[17433], 60.00th=[20055], 00:24:21.427 | 70.00th=[23462], 80.00th=[26346], 90.00th=[34341], 95.00th=[50070], 00:24:21.427 | 99.00th=[63701], 99.50th=[68682], 99.90th=[74974], 99.95th=[74974], 00:24:21.427 | 99.99th=[74974] 00:24:21.427 write: IOPS=73, BW=9359KiB/s (9584kB/s)(80.0MiB/8753msec); 0 zone resets 00:24:21.427 slat (usec): min=40, max=1533, avg=129.97, stdev=155.75 00:24:21.427 clat (msec): min=10, max=349, avg=107.85, stdev=52.36 00:24:21.427 lat (msec): min=10, max=349, avg=107.98, stdev=52.37 00:24:21.427 clat percentiles (msec): 00:24:21.427 | 1.00th=[ 19], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 78], 00:24:21.427 | 30.00th=[ 81], 40.00th=[ 84], 50.00th=[ 88], 60.00th=[ 93], 00:24:21.427 | 70.00th=[ 103], 80.00th=[ 126], 90.00th=[ 199], 95.00th=[ 226], 00:24:21.427 | 99.00th=[ 309], 99.50th=[ 326], 99.90th=[ 351], 99.95th=[ 351], 00:24:21.427 | 99.99th=[ 351] 00:24:21.427 bw ( KiB/s): min= 2048, max=13595, per=0.86%, avg=8190.65, stdev=4086.13, samples=20 00:24:21.427 iops : min= 16, max= 106, avg=63.85, stdev=31.87, samples=20 00:24:21.427 lat (msec) : 10=5.23%, 20=21.37%, 50=15.16%, 100=39.98%, 250=16.84% 00:24:21.427 lat (msec) : 500=1.42% 00:24:21.427 cpu : usr=0.31%, sys=0.37%, ctx=1810, majf=0, minf=3 00:24:21.427 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.427 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.427 issued rwts: total=488,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.427 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.427 job46: (groupid=0, jobs=1): err= 0: pid=71723: Mon Jul 22 17:02:22 2024 00:24:21.427 read: IOPS=60, BW=7768KiB/s (7955kB/s)(60.0MiB/7909msec) 00:24:21.427 slat (usec): min=6, max=1561, avg=67.12, stdev=133.76 00:24:21.427 clat (msec): min=7, max=101, avg=24.59, stdev=15.38 00:24:21.427 lat (msec): min=7, max=101, avg=24.66, stdev=15.38 00:24:21.427 clat percentiles (msec): 00:24:21.427 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 13], 20.00th=[ 14], 00:24:21.427 | 30.00th=[ 16], 40.00th=[ 18], 50.00th=[ 21], 60.00th=[ 24], 00:24:21.427 | 70.00th=[ 27], 80.00th=[ 30], 90.00th=[ 42], 95.00th=[ 56], 00:24:21.427 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 102], 99.95th=[ 102], 00:24:21.427 | 99.99th=[ 102] 00:24:21.427 write: IOPS=65, BW=8341KiB/s (8542kB/s)(69.9MiB/8578msec); 0 zone resets 00:24:21.427 slat (usec): min=31, max=3665, avg=131.56, stdev=225.45 00:24:21.427 clat (msec): min=53, max=503, avg=121.41, stdev=65.48 00:24:21.427 lat (msec): min=53, max=503, avg=121.54, stdev=65.51 00:24:21.427 clat percentiles (msec): 00:24:21.427 | 1.00th=[ 59], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 79], 00:24:21.427 | 30.00th=[ 82], 40.00th=[ 89], 50.00th=[ 96], 60.00th=[ 106], 00:24:21.427 | 70.00th=[ 122], 80.00th=[ 157], 90.00th=[ 209], 95.00th=[ 249], 00:24:21.427 | 99.00th=[ 380], 99.50th=[ 472], 99.90th=[ 502], 99.95th=[ 502], 00:24:21.427 | 99.99th=[ 502] 00:24:21.427 bw ( KiB/s): min= 1536, max=12544, per=0.74%, avg=7064.65, stdev=4130.17, samples=20 00:24:21.427 iops : min= 12, max= 98, avg=55.10, stdev=32.24, samples=20 00:24:21.427 lat (msec) : 10=1.44%, 20=20.69%, 50=21.27%, 100=31.95%, 250=22.04% 00:24:21.427 lat (msec) : 500=2.50%, 750=0.10% 00:24:21.427 cpu : usr=0.40%, sys=0.22%, ctx=1708, 
majf=0, minf=5 00:24:21.427 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.427 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.427 issued rwts: total=480,559,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.427 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.427 job47: (groupid=0, jobs=1): err= 0: pid=71724: Mon Jul 22 17:02:22 2024 00:24:21.427 read: IOPS=60, BW=7715KiB/s (7900kB/s)(60.0MiB/7964msec) 00:24:21.427 slat (usec): min=8, max=1878, avg=79.83, stdev=163.26 00:24:21.427 clat (usec): min=5910, max=90011, avg=20098.70, stdev=12536.42 00:24:21.427 lat (usec): min=5933, max=90022, avg=20178.53, stdev=12544.92 00:24:21.427 clat percentiles (usec): 00:24:21.427 | 1.00th=[ 6587], 5.00th=[ 7635], 10.00th=[ 8225], 20.00th=[10421], 00:24:21.427 | 30.00th=[11863], 40.00th=[14353], 50.00th=[16909], 60.00th=[20317], 00:24:21.427 | 70.00th=[23987], 80.00th=[27919], 90.00th=[32113], 95.00th=[42206], 00:24:21.427 | 99.00th=[79168], 99.50th=[84411], 99.90th=[89654], 99.95th=[89654], 00:24:21.427 | 99.99th=[89654] 00:24:21.427 write: IOPS=57, BW=7404KiB/s (7581kB/s)(64.0MiB/8852msec); 0 zone resets 00:24:21.427 slat (usec): min=41, max=3046, avg=134.03, stdev=198.99 00:24:21.427 clat (msec): min=47, max=548, avg=137.05, stdev=76.62 00:24:21.427 lat (msec): min=47, max=548, avg=137.19, stdev=76.64 00:24:21.427 clat percentiles (msec): 00:24:21.427 | 1.00th=[ 53], 5.00th=[ 72], 10.00th=[ 75], 20.00th=[ 81], 00:24:21.427 | 30.00th=[ 87], 40.00th=[ 95], 50.00th=[ 107], 60.00th=[ 127], 00:24:21.427 | 70.00th=[ 150], 80.00th=[ 192], 90.00th=[ 247], 95.00th=[ 288], 00:24:21.427 | 99.00th=[ 401], 99.50th=[ 502], 99.90th=[ 550], 99.95th=[ 550], 00:24:21.427 | 99.99th=[ 550] 00:24:21.427 bw ( KiB/s): min= 1024, max=12544, per=0.68%, avg=6461.95, stdev=3832.67, samples=20 00:24:21.427 iops : min= 8, max= 98, avg=50.30, 
stdev=29.88, samples=20 00:24:21.427 lat (msec) : 10=8.06%, 20=20.26%, 50=19.25%, 100=24.60%, 250=23.49% 00:24:21.427 lat (msec) : 500=4.03%, 750=0.30% 00:24:21.427 cpu : usr=0.37%, sys=0.21%, ctx=1683, majf=0, minf=3 00:24:21.427 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.427 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.427 issued rwts: total=480,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.427 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.427 job48: (groupid=0, jobs=1): err= 0: pid=71725: Mon Jul 22 17:02:22 2024 00:24:21.427 read: IOPS=58, BW=7517KiB/s (7697kB/s)(60.0MiB/8174msec) 00:24:21.427 slat (usec): min=7, max=2750, avg=107.31, stdev=248.41 00:24:21.427 clat (usec): min=11397, max=72424, avg=27418.93, stdev=11179.52 00:24:21.427 lat (usec): min=11790, max=72443, avg=27526.25, stdev=11165.93 00:24:21.427 clat percentiles (usec): 00:24:21.427 | 1.00th=[12387], 5.00th=[13566], 10.00th=[15008], 20.00th=[20055], 00:24:21.427 | 30.00th=[22152], 40.00th=[23725], 50.00th=[24773], 60.00th=[26346], 00:24:21.427 | 70.00th=[28181], 80.00th=[32900], 90.00th=[42206], 95.00th=[53216], 00:24:21.427 | 99.00th=[70779], 99.50th=[71828], 99.90th=[72877], 99.95th=[72877], 00:24:21.427 | 99.99th=[72877] 00:24:21.427 write: IOPS=73, BW=9461KiB/s (9688kB/s)(77.6MiB/8402msec); 0 zone resets 00:24:21.427 slat (usec): min=38, max=2079, avg=141.11, stdev=212.15 00:24:21.427 clat (msec): min=20, max=418, avg=107.01, stdev=54.37 00:24:21.427 lat (msec): min=20, max=419, avg=107.15, stdev=54.38 00:24:21.427 clat percentiles (msec): 00:24:21.427 | 1.00th=[ 29], 5.00th=[ 71], 10.00th=[ 74], 20.00th=[ 79], 00:24:21.427 | 30.00th=[ 83], 40.00th=[ 87], 50.00th=[ 91], 60.00th=[ 97], 00:24:21.427 | 70.00th=[ 106], 80.00th=[ 116], 90.00th=[ 148], 95.00th=[ 213], 00:24:21.427 | 99.00th=[ 376], 99.50th=[ 397], 
99.90th=[ 418], 99.95th=[ 418], 00:24:21.427 | 99.99th=[ 418] 00:24:21.427 bw ( KiB/s): min= 2299, max=12800, per=0.92%, avg=8715.83, stdev=3616.40, samples=18 00:24:21.427 iops : min= 17, max= 100, avg=67.89, stdev=28.42, samples=18 00:24:21.427 lat (msec) : 20=8.45%, 50=33.06%, 100=38.69%, 250=17.71%, 500=2.09% 00:24:21.427 cpu : usr=0.41%, sys=0.27%, ctx=1853, majf=0, minf=3 00:24:21.427 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.427 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.427 issued rwts: total=480,621,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.427 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.427 job49: (groupid=0, jobs=1): err= 0: pid=71726: Mon Jul 22 17:02:22 2024 00:24:21.427 read: IOPS=59, BW=7668KiB/s (7853kB/s)(60.0MiB/8012msec) 00:24:21.427 slat (usec): min=8, max=4088, avg=80.55, stdev=218.32 00:24:21.427 clat (usec): min=14234, max=99064, avg=27493.38, stdev=12811.64 00:24:21.427 lat (usec): min=14252, max=99078, avg=27573.93, stdev=12796.97 00:24:21.427 clat percentiles (usec): 00:24:21.427 | 1.00th=[15664], 5.00th=[16712], 10.00th=[17433], 20.00th=[18744], 00:24:21.427 | 30.00th=[20317], 40.00th=[21627], 50.00th=[23725], 60.00th=[25297], 00:24:21.427 | 70.00th=[28181], 80.00th=[33817], 90.00th=[40633], 95.00th=[52691], 00:24:21.427 | 99.00th=[83362], 99.50th=[85459], 99.90th=[99091], 99.95th=[99091], 00:24:21.427 | 99.99th=[99091] 00:24:21.427 write: IOPS=73, BW=9457KiB/s (9684kB/s)(77.6MiB/8405msec); 0 zone resets 00:24:21.427 slat (usec): min=39, max=1667, avg=139.60, stdev=201.22 00:24:21.428 clat (msec): min=30, max=424, avg=106.92, stdev=49.74 00:24:21.428 lat (msec): min=31, max=424, avg=107.06, stdev=49.76 00:24:21.428 clat percentiles (msec): 00:24:21.428 | 1.00th=[ 40], 5.00th=[ 71], 10.00th=[ 74], 20.00th=[ 79], 00:24:21.428 | 30.00th=[ 82], 40.00th=[ 
89], 50.00th=[ 93], 60.00th=[ 100], 00:24:21.428 | 70.00th=[ 106], 80.00th=[ 122], 90.00th=[ 146], 95.00th=[ 199], 00:24:21.428 | 99.00th=[ 363], 99.50th=[ 372], 99.90th=[ 426], 99.95th=[ 426], 00:24:21.428 | 99.99th=[ 426] 00:24:21.428 bw ( KiB/s): min= 1792, max=12544, per=0.87%, avg=8269.47, stdev=3976.05, samples=19 00:24:21.428 iops : min= 14, max= 98, avg=64.42, stdev=31.09, samples=19 00:24:21.428 lat (msec) : 20=12.35%, 50=29.06%, 100=37.60%, 250=19.53%, 500=1.45% 00:24:21.428 cpu : usr=0.41%, sys=0.27%, ctx=1831, majf=0, minf=5 00:24:21.428 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.428 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.428 issued rwts: total=480,621,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.428 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.428 job50: (groupid=0, jobs=1): err= 0: pid=71727: Mon Jul 22 17:02:22 2024 00:24:21.428 read: IOPS=77, BW=9920KiB/s (10.2MB/s)(80.0MiB/8258msec) 00:24:21.428 slat (usec): min=6, max=1090, avg=62.17, stdev=128.15 00:24:21.428 clat (msec): min=2, max=123, avg=17.13, stdev=18.47 00:24:21.428 lat (msec): min=3, max=123, avg=17.19, stdev=18.47 00:24:21.428 clat percentiles (msec): 00:24:21.428 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 8], 00:24:21.428 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 13], 00:24:21.428 | 70.00th=[ 17], 80.00th=[ 24], 90.00th=[ 29], 95.00th=[ 50], 00:24:21.428 | 99.00th=[ 106], 99.50th=[ 116], 99.90th=[ 124], 99.95th=[ 124], 00:24:21.428 | 99.99th=[ 124] 00:24:21.428 write: IOPS=90, BW=11.4MiB/s (11.9MB/s)(98.2MiB/8655msec); 0 zone resets 00:24:21.428 slat (usec): min=38, max=3706, avg=146.28, stdev=255.82 00:24:21.428 clat (msec): min=46, max=320, avg=87.28, stdev=40.20 00:24:21.428 lat (msec): min=47, max=320, avg=87.43, stdev=40.22 00:24:21.428 clat percentiles (msec): 00:24:21.428 | 
1.00th=[ 50], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 59], 00:24:21.428 | 30.00th=[ 63], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 80], 00:24:21.428 | 70.00th=[ 95], 80.00th=[ 113], 90.00th=[ 144], 95.00th=[ 176], 00:24:21.428 | 99.00th=[ 228], 99.50th=[ 247], 99.90th=[ 321], 99.95th=[ 321], 00:24:21.428 | 99.99th=[ 321] 00:24:21.428 bw ( KiB/s): min= 256, max=18139, per=1.05%, avg=9965.70, stdev=4893.01, samples=20 00:24:21.428 iops : min= 2, max= 141, avg=77.60, stdev=38.32, samples=20 00:24:21.428 lat (msec) : 4=0.07%, 10=19.42%, 20=13.18%, 50=11.85%, 100=40.11% 00:24:21.428 lat (msec) : 250=15.15%, 500=0.21% 00:24:21.428 cpu : usr=0.56%, sys=0.29%, ctx=2344, majf=0, minf=3 00:24:21.428 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.428 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.428 issued rwts: total=640,786,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.428 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.428 job51: (groupid=0, jobs=1): err= 0: pid=71728: Mon Jul 22 17:02:22 2024 00:24:21.428 read: IOPS=80, BW=10.1MiB/s (10.5MB/s)(80.0MiB/7953msec) 00:24:21.428 slat (usec): min=5, max=2614, avg=51.91, stdev=127.61 00:24:21.428 clat (msec): min=3, max=225, avg=23.49, stdev=39.14 00:24:21.428 lat (msec): min=3, max=225, avg=23.55, stdev=39.15 00:24:21.428 clat percentiles (msec): 00:24:21.428 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:24:21.428 | 30.00th=[ 10], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:24:21.428 | 70.00th=[ 16], 80.00th=[ 19], 90.00th=[ 42], 95.00th=[ 84], 00:24:21.428 | 99.00th=[ 220], 99.50th=[ 222], 99.90th=[ 226], 99.95th=[ 226], 00:24:21.428 | 99.99th=[ 226] 00:24:21.428 write: IOPS=90, BW=11.3MiB/s (11.8MB/s)(92.0MiB/8147msec); 0 zone resets 00:24:21.428 slat (usec): min=32, max=2372, avg=139.46, stdev=208.74 00:24:21.428 clat (msec): min=45, max=449, 
avg=88.06, stdev=47.29 00:24:21.428 lat (msec): min=45, max=449, avg=88.20, stdev=47.28 00:24:21.428 clat percentiles (msec): 00:24:21.428 | 1.00th=[ 48], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 57], 00:24:21.428 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 73], 60.00th=[ 82], 00:24:21.428 | 70.00th=[ 93], 80.00th=[ 113], 90.00th=[ 140], 95.00th=[ 167], 00:24:21.428 | 99.00th=[ 288], 99.50th=[ 388], 99.90th=[ 451], 99.95th=[ 451], 00:24:21.428 | 99.99th=[ 451] 00:24:21.428 bw ( KiB/s): min= 1021, max=17152, per=1.03%, avg=9815.47, stdev=5044.24, samples=19 00:24:21.428 iops : min= 7, max= 134, avg=76.47, stdev=39.56, samples=19 00:24:21.428 lat (msec) : 4=0.07%, 10=14.10%, 20=24.27%, 50=6.25%, 100=39.24% 00:24:21.428 lat (msec) : 250=15.26%, 500=0.80% 00:24:21.428 cpu : usr=0.48%, sys=0.29%, ctx=2317, majf=0, minf=5 00:24:21.428 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.428 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.428 issued rwts: total=640,736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.428 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.428 job52: (groupid=0, jobs=1): err= 0: pid=71729: Mon Jul 22 17:02:22 2024 00:24:21.428 read: IOPS=89, BW=11.2MiB/s (11.7MB/s)(100MiB/8967msec) 00:24:21.428 slat (usec): min=5, max=2339, avg=55.04, stdev=128.69 00:24:21.428 clat (msec): min=3, max=177, avg=15.89, stdev=19.03 00:24:21.428 lat (msec): min=3, max=177, avg=15.94, stdev=19.02 00:24:21.428 clat percentiles (msec): 00:24:21.428 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:24:21.428 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 12], 60.00th=[ 13], 00:24:21.428 | 70.00th=[ 15], 80.00th=[ 17], 90.00th=[ 27], 95.00th=[ 44], 00:24:21.428 | 99.00th=[ 72], 99.50th=[ 169], 99.90th=[ 178], 99.95th=[ 178], 00:24:21.428 | 99.99th=[ 178] 00:24:21.428 write: IOPS=104, BW=13.0MiB/s 
(13.7MB/s)(110MiB/8448msec); 0 zone resets 00:24:21.428 slat (usec): min=33, max=6155, avg=136.73, stdev=285.87 00:24:21.428 clat (msec): min=3, max=291, avg=76.12, stdev=35.88 00:24:21.428 lat (msec): min=3, max=292, avg=76.26, stdev=35.90 00:24:21.428 clat percentiles (msec): 00:24:21.428 | 1.00th=[ 10], 5.00th=[ 49], 10.00th=[ 51], 20.00th=[ 54], 00:24:21.428 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 71], 00:24:21.428 | 70.00th=[ 80], 80.00th=[ 90], 90.00th=[ 118], 95.00th=[ 159], 00:24:21.428 | 99.00th=[ 207], 99.50th=[ 224], 99.90th=[ 292], 99.95th=[ 292], 00:24:21.428 | 99.99th=[ 292] 00:24:21.428 bw ( KiB/s): min= 1536, max=20224, per=1.24%, avg=11762.53, stdev=5526.59, samples=19 00:24:21.428 iops : min= 12, max= 158, avg=91.89, stdev=43.18, samples=19 00:24:21.428 lat (msec) : 4=0.30%, 10=20.24%, 20=21.67%, 50=8.99%, 100=40.54% 00:24:21.428 lat (msec) : 250=8.10%, 500=0.18% 00:24:21.428 cpu : usr=0.63%, sys=0.38%, ctx=2583, majf=0, minf=1 00:24:21.428 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.428 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.428 issued rwts: total=800,880,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.428 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.428 job53: (groupid=0, jobs=1): err= 0: pid=71730: Mon Jul 22 17:02:22 2024 00:24:21.428 read: IOPS=77, BW=9944KiB/s (10.2MB/s)(80.0MiB/8238msec) 00:24:21.428 slat (usec): min=7, max=1230, avg=69.57, stdev=131.43 00:24:21.428 clat (msec): min=2, max=118, avg=16.12, stdev=14.28 00:24:21.428 lat (msec): min=3, max=118, avg=16.19, stdev=14.29 00:24:21.428 clat percentiles (msec): 00:24:21.428 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:24:21.428 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 15], 00:24:21.428 | 70.00th=[ 17], 80.00th=[ 19], 90.00th=[ 24], 95.00th=[ 32], 00:24:21.428 
| 99.00th=[ 93], 99.50th=[ 107], 99.90th=[ 120], 99.95th=[ 120], 00:24:21.428 | 99.99th=[ 120] 00:24:21.428 write: IOPS=85, BW=10.7MiB/s (11.3MB/s)(93.8MiB/8734msec); 0 zone resets 00:24:21.428 slat (usec): min=38, max=5252, avg=144.20, stdev=270.43 00:24:21.428 clat (msec): min=32, max=316, avg=92.47, stdev=44.61 00:24:21.428 lat (msec): min=32, max=316, avg=92.62, stdev=44.61 00:24:21.428 clat percentiles (msec): 00:24:21.428 | 1.00th=[ 40], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 59], 00:24:21.428 | 30.00th=[ 64], 40.00th=[ 69], 50.00th=[ 77], 60.00th=[ 87], 00:24:21.428 | 70.00th=[ 105], 80.00th=[ 124], 90.00th=[ 146], 95.00th=[ 182], 00:24:21.428 | 99.00th=[ 245], 99.50th=[ 288], 99.90th=[ 317], 99.95th=[ 317], 00:24:21.428 | 99.99th=[ 317] 00:24:21.428 bw ( KiB/s): min= 768, max=17152, per=1.00%, avg=9507.80, stdev=4792.63, samples=20 00:24:21.428 iops : min= 6, max= 134, avg=74.15, stdev=37.49, samples=20 00:24:21.428 lat (msec) : 4=0.14%, 10=12.09%, 20=26.26%, 50=8.78%, 100=34.60% 00:24:21.428 lat (msec) : 250=17.70%, 500=0.43% 00:24:21.428 cpu : usr=0.47%, sys=0.39%, ctx=2277, majf=0, minf=4 00:24:21.428 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.428 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.428 issued rwts: total=640,750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.428 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.428 job54: (groupid=0, jobs=1): err= 0: pid=71731: Mon Jul 22 17:02:22 2024 00:24:21.428 read: IOPS=91, BW=11.4MiB/s (11.9MB/s)(100MiB/8780msec) 00:24:21.428 slat (usec): min=7, max=1504, avg=59.73, stdev=136.73 00:24:21.428 clat (usec): min=5984, max=71180, avg=15218.87, stdev=7889.89 00:24:21.428 lat (usec): min=6095, max=71193, avg=15278.60, stdev=7892.07 00:24:21.428 clat percentiles (usec): 00:24:21.428 | 1.00th=[ 6587], 5.00th=[ 7439], 10.00th=[ 8094], 
20.00th=[ 9765], 00:24:21.428 | 30.00th=[10814], 40.00th=[12125], 50.00th=[13173], 60.00th=[15139], 00:24:21.428 | 70.00th=[17433], 80.00th=[19006], 90.00th=[21890], 95.00th=[28705], 00:24:21.428 | 99.00th=[48497], 99.50th=[64750], 99.90th=[70779], 99.95th=[70779], 00:24:21.428 | 99.99th=[70779] 00:24:21.428 write: IOPS=101, BW=12.7MiB/s (13.3MB/s)(108MiB/8507msec); 0 zone resets 00:24:21.428 slat (usec): min=40, max=2604, avg=135.49, stdev=218.10 00:24:21.428 clat (msec): min=9, max=236, avg=78.17, stdev=36.00 00:24:21.428 lat (msec): min=9, max=236, avg=78.31, stdev=36.02 00:24:21.429 clat percentiles (msec): 00:24:21.429 | 1.00th=[ 12], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 55], 00:24:21.429 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 67], 60.00th=[ 72], 00:24:21.429 | 70.00th=[ 79], 80.00th=[ 95], 90.00th=[ 129], 95.00th=[ 159], 00:24:21.429 | 99.00th=[ 213], 99.50th=[ 224], 99.90th=[ 236], 99.95th=[ 236], 00:24:21.429 | 99.99th=[ 236] 00:24:21.429 bw ( KiB/s): min= 2816, max=18432, per=1.15%, avg=10940.95, stdev=5300.50, samples=20 00:24:21.429 iops : min= 22, max= 144, avg=85.35, stdev=41.44, samples=20 00:24:21.429 lat (msec) : 10=10.41%, 20=31.53%, 50=9.81%, 100=38.99%, 250=9.27% 00:24:21.429 cpu : usr=0.62%, sys=0.37%, ctx=2678, majf=0, minf=3 00:24:21.429 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.429 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.429 issued rwts: total=800,862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.429 job55: (groupid=0, jobs=1): err= 0: pid=71732: Mon Jul 22 17:02:22 2024 00:24:21.429 read: IOPS=91, BW=11.4MiB/s (12.0MB/s)(100MiB/8741msec) 00:24:21.429 slat (usec): min=7, max=1700, avg=54.17, stdev=120.11 00:24:21.429 clat (usec): min=4634, max=55058, avg=14802.87, stdev=8578.57 00:24:21.429 lat (usec): 
min=4657, max=55077, avg=14857.04, stdev=8585.72 00:24:21.429 clat percentiles (usec): 00:24:21.429 | 1.00th=[ 5276], 5.00th=[ 7046], 10.00th=[ 7439], 20.00th=[ 8979], 00:24:21.429 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11863], 60.00th=[13435], 00:24:21.429 | 70.00th=[15401], 80.00th=[19268], 90.00th=[26608], 95.00th=[33817], 00:24:21.429 | 99.00th=[45876], 99.50th=[46400], 99.90th=[55313], 99.95th=[55313], 00:24:21.429 | 99.99th=[55313] 00:24:21.429 write: IOPS=100, BW=12.6MiB/s (13.2MB/s)(108MiB/8567msec); 0 zone resets 00:24:21.429 slat (usec): min=31, max=14748, avg=151.17, stdev=544.11 00:24:21.429 clat (msec): min=26, max=301, avg=78.36, stdev=35.38 00:24:21.429 lat (msec): min=27, max=302, avg=78.51, stdev=35.37 00:24:21.429 clat percentiles (msec): 00:24:21.429 | 1.00th=[ 49], 5.00th=[ 51], 10.00th=[ 51], 20.00th=[ 55], 00:24:21.429 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 66], 60.00th=[ 72], 00:24:21.429 | 70.00th=[ 82], 80.00th=[ 97], 90.00th=[ 123], 95.00th=[ 148], 00:24:21.429 | 99.00th=[ 236], 99.50th=[ 271], 99.90th=[ 300], 99.95th=[ 300], 00:24:21.429 | 99.99th=[ 300] 00:24:21.429 bw ( KiB/s): min= 3328, max=18944, per=1.15%, avg=10969.25, stdev=5180.37, samples=20 00:24:21.429 iops : min= 26, max= 148, avg=85.60, stdev=40.50, samples=20 00:24:21.429 lat (msec) : 10=14.12%, 20=25.18%, 50=11.96%, 100=39.36%, 250=9.01% 00:24:21.429 lat (msec) : 500=0.36% 00:24:21.429 cpu : usr=0.66%, sys=0.30%, ctx=2683, majf=0, minf=3 00:24:21.429 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.429 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.429 issued rwts: total=800,864,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.429 job56: (groupid=0, jobs=1): err= 0: pid=71737: Mon Jul 22 17:02:22 2024 00:24:21.429 read: IOPS=96, BW=12.1MiB/s 
(12.7MB/s)(100MiB/8265msec) 00:24:21.429 slat (usec): min=6, max=1045, avg=44.67, stdev=83.99 00:24:21.429 clat (msec): min=4, max=127, avg=14.61, stdev=16.21 00:24:21.429 lat (msec): min=4, max=127, avg=14.65, stdev=16.20 00:24:21.429 clat percentiles (msec): 00:24:21.429 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 8], 00:24:21.429 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 14], 00:24:21.429 | 70.00th=[ 15], 80.00th=[ 17], 90.00th=[ 22], 95.00th=[ 26], 00:24:21.429 | 99.00th=[ 113], 99.50th=[ 125], 99.90th=[ 128], 99.95th=[ 128], 00:24:21.429 | 99.99th=[ 128] 00:24:21.429 write: IOPS=94, BW=11.8MiB/s (12.4MB/s)(101MiB/8573msec); 0 zone resets 00:24:21.429 slat (usec): min=32, max=4491, avg=129.72, stdev=230.23 00:24:21.429 clat (msec): min=39, max=328, avg=84.07, stdev=36.46 00:24:21.429 lat (msec): min=39, max=328, avg=84.20, stdev=36.46 00:24:21.429 clat percentiles (msec): 00:24:21.429 | 1.00th=[ 46], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 57], 00:24:21.429 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 83], 00:24:21.429 | 70.00th=[ 91], 80.00th=[ 107], 90.00th=[ 136], 95.00th=[ 161], 00:24:21.429 | 99.00th=[ 224], 99.50th=[ 239], 99.90th=[ 330], 99.95th=[ 330], 00:24:21.429 | 99.99th=[ 330] 00:24:21.429 bw ( KiB/s): min= 2810, max=16640, per=1.08%, avg=10239.70, stdev=3949.82, samples=20 00:24:21.429 iops : min= 21, max= 130, avg=79.85, stdev=31.00, samples=20 00:24:21.429 lat (msec) : 10=19.71%, 20=24.00%, 50=7.15%, 100=36.75%, 250=12.25% 00:24:21.429 lat (msec) : 500=0.12% 00:24:21.429 cpu : usr=0.66%, sys=0.25%, ctx=2580, majf=0, minf=5 00:24:21.429 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.429 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.429 issued rwts: total=800,808,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.429 latency : target=0, window=0, percentile=100.00%, 
depth=8 00:24:21.429 job57: (groupid=0, jobs=1): err= 0: pid=71738: Mon Jul 22 17:02:22 2024 00:24:21.429 read: IOPS=91, BW=11.5MiB/s (12.0MB/s)(100MiB/8707msec) 00:24:21.429 slat (usec): min=7, max=1684, avg=67.24, stdev=143.84 00:24:21.429 clat (msec): min=5, max=161, avg=16.10, stdev=15.33 00:24:21.429 lat (msec): min=5, max=161, avg=16.17, stdev=15.34 00:24:21.429 clat percentiles (msec): 00:24:21.429 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 10], 00:24:21.429 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:24:21.429 | 70.00th=[ 15], 80.00th=[ 18], 90.00th=[ 31], 95.00th=[ 39], 00:24:21.429 | 99.00th=[ 66], 99.50th=[ 127], 99.90th=[ 161], 99.95th=[ 161], 00:24:21.429 | 99.99th=[ 161] 00:24:21.429 write: IOPS=99, BW=12.5MiB/s (13.1MB/s)(105MiB/8421msec); 0 zone resets 00:24:21.429 slat (usec): min=41, max=2720, avg=129.93, stdev=186.09 00:24:21.429 clat (msec): min=21, max=296, avg=79.10, stdev=40.57 00:24:21.429 lat (msec): min=21, max=296, avg=79.23, stdev=40.58 00:24:21.429 clat percentiles (msec): 00:24:21.429 | 1.00th=[ 47], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 55], 00:24:21.429 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 64], 60.00th=[ 69], 00:24:21.429 | 70.00th=[ 79], 80.00th=[ 92], 90.00th=[ 132], 95.00th=[ 165], 00:24:21.429 | 99.00th=[ 255], 99.50th=[ 268], 99.90th=[ 296], 99.95th=[ 296], 00:24:21.429 | 99.99th=[ 296] 00:24:21.429 bw ( KiB/s): min= 512, max=17664, per=1.12%, avg=10671.65, stdev=5541.66, samples=20 00:24:21.429 iops : min= 4, max= 138, avg=83.20, stdev=43.26, samples=20 00:24:21.429 lat (msec) : 10=13.65%, 20=27.30%, 50=8.96%, 100=40.89%, 250=8.53% 00:24:21.429 lat (msec) : 500=0.67% 00:24:21.429 cpu : usr=0.66%, sys=0.33%, ctx=2666, majf=0, minf=1 00:24:21.429 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.429 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:24:21.429 issued rwts: total=800,841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.429 job58: (groupid=0, jobs=1): err= 0: pid=71744: Mon Jul 22 17:02:22 2024 00:24:21.429 read: IOPS=91, BW=11.4MiB/s (12.0MB/s)(100MiB/8771msec) 00:24:21.429 slat (usec): min=6, max=1439, avg=61.95, stdev=121.32 00:24:21.429 clat (usec): min=4312, max=33303, avg=14230.73, stdev=6184.28 00:24:21.429 lat (usec): min=4326, max=34241, avg=14292.67, stdev=6180.18 00:24:21.429 clat percentiles (usec): 00:24:21.429 | 1.00th=[ 5080], 5.00th=[ 6325], 10.00th=[ 7373], 20.00th=[ 8455], 00:24:21.429 | 30.00th=[10552], 40.00th=[11731], 50.00th=[13042], 60.00th=[14615], 00:24:21.429 | 70.00th=[16188], 80.00th=[19792], 90.00th=[22676], 95.00th=[27132], 00:24:21.429 | 99.00th=[30802], 99.50th=[31065], 99.90th=[33424], 99.95th=[33424], 00:24:21.429 | 99.99th=[33424] 00:24:21.429 write: IOPS=100, BW=12.5MiB/s (13.1MB/s)(108MiB/8604msec); 0 zone resets 00:24:21.429 slat (usec): min=41, max=2949, avg=142.65, stdev=222.22 00:24:21.429 clat (msec): min=44, max=290, avg=78.78, stdev=35.77 00:24:21.429 lat (msec): min=44, max=290, avg=78.92, stdev=35.77 00:24:21.429 clat percentiles (msec): 00:24:21.429 | 1.00th=[ 49], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 54], 00:24:21.429 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 66], 60.00th=[ 71], 00:24:21.429 | 70.00th=[ 80], 80.00th=[ 99], 90.00th=[ 132], 95.00th=[ 159], 00:24:21.429 | 99.00th=[ 209], 99.50th=[ 218], 99.90th=[ 292], 99.95th=[ 292], 00:24:21.429 | 99.99th=[ 292] 00:24:21.429 bw ( KiB/s): min= 3072, max=17664, per=1.15%, avg=10956.15, stdev=5205.07, samples=20 00:24:21.429 iops : min= 24, max= 138, avg=85.50, stdev=40.68, samples=20 00:24:21.429 lat (msec) : 10=13.29%, 20=25.56%, 50=12.03%, 100=39.63%, 250=9.44% 00:24:21.429 lat (msec) : 500=0.06% 00:24:21.429 cpu : usr=0.64%, sys=0.38%, ctx=2772, majf=0, minf=8 00:24:21.429 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:24:21.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.429 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.429 issued rwts: total=800,863,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.429 job59: (groupid=0, jobs=1): err= 0: pid=71747: Mon Jul 22 17:02:22 2024 00:24:21.429 read: IOPS=95, BW=12.0MiB/s (12.6MB/s)(100MiB/8340msec) 00:24:21.429 slat (usec): min=6, max=2072, avg=62.81, stdev=143.39 00:24:21.429 clat (usec): min=3329, max=91230, avg=14651.81, stdev=10104.15 00:24:21.429 lat (usec): min=3425, max=91246, avg=14714.62, stdev=10112.08 00:24:21.429 clat percentiles (usec): 00:24:21.429 | 1.00th=[ 4948], 5.00th=[ 6456], 10.00th=[ 7439], 20.00th=[ 9110], 00:24:21.429 | 30.00th=[10159], 40.00th=[11207], 50.00th=[12256], 60.00th=[13042], 00:24:21.429 | 70.00th=[14615], 80.00th=[16909], 90.00th=[22152], 95.00th=[31851], 00:24:21.429 | 99.00th=[60556], 99.50th=[78119], 99.90th=[91751], 99.95th=[91751], 00:24:21.429 | 99.99th=[91751] 00:24:21.429 write: IOPS=95, BW=11.9MiB/s (12.5MB/s)(102MiB/8559msec); 0 zone resets 00:24:21.429 slat (usec): min=31, max=19667, avg=155.92, stdev=712.80 00:24:21.429 clat (msec): min=44, max=317, avg=83.02, stdev=35.74 00:24:21.429 lat (msec): min=44, max=318, avg=83.18, stdev=35.75 00:24:21.429 clat percentiles (msec): 00:24:21.430 | 1.00th=[ 48], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 57], 00:24:21.430 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 81], 00:24:21.430 | 70.00th=[ 89], 80.00th=[ 106], 90.00th=[ 136], 95.00th=[ 155], 00:24:21.430 | 99.00th=[ 199], 99.50th=[ 232], 99.90th=[ 317], 99.95th=[ 317], 00:24:21.430 | 99.99th=[ 317] 00:24:21.430 bw ( KiB/s): min= 1792, max=16896, per=1.09%, avg=10353.55, stdev=4668.02, samples=20 00:24:21.430 iops : min= 14, max= 132, avg=80.80, stdev=36.48, samples=20 00:24:21.430 lat (msec) : 4=0.12%, 10=13.79%, 20=28.32%, 
50=8.29%, 100=37.60% 00:24:21.430 lat (msec) : 250=11.69%, 500=0.19% 00:24:21.430 cpu : usr=0.65%, sys=0.29%, ctx=2683, majf=0, minf=5 00:24:21.430 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.430 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.430 issued rwts: total=800,817,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.430 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.430 job60: (groupid=0, jobs=1): err= 0: pid=71748: Mon Jul 22 17:02:22 2024 00:24:21.430 read: IOPS=89, BW=11.2MiB/s (11.7MB/s)(100MiB/8950msec) 00:24:21.430 slat (usec): min=8, max=1985, avg=66.23, stdev=157.68 00:24:21.430 clat (usec): min=4787, max=62448, avg=14287.97, stdev=7845.35 00:24:21.430 lat (usec): min=4815, max=62465, avg=14354.19, stdev=7838.07 00:24:21.430 clat percentiles (usec): 00:24:21.430 | 1.00th=[ 5669], 5.00th=[ 6915], 10.00th=[ 7963], 20.00th=[ 8848], 00:24:21.430 | 30.00th=[ 9896], 40.00th=[11076], 50.00th=[12256], 60.00th=[13173], 00:24:21.430 | 70.00th=[14746], 80.00th=[17433], 90.00th=[23200], 95.00th=[31065], 00:24:21.430 | 99.00th=[44303], 99.50th=[54264], 99.90th=[62653], 99.95th=[62653], 00:24:21.430 | 99.99th=[62653] 00:24:21.430 write: IOPS=105, BW=13.2MiB/s (13.9MB/s)(114MiB/8607msec); 0 zone resets 00:24:21.430 slat (usec): min=31, max=3577, avg=142.45, stdev=259.18 00:24:21.430 clat (msec): min=31, max=238, avg=74.52, stdev=26.47 00:24:21.430 lat (msec): min=31, max=238, avg=74.66, stdev=26.50 00:24:21.430 clat percentiles (msec): 00:24:21.430 | 1.00th=[ 47], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 55], 00:24:21.430 | 30.00th=[ 60], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 74], 00:24:21.430 | 70.00th=[ 80], 80.00th=[ 89], 90.00th=[ 101], 95.00th=[ 117], 00:24:21.430 | 99.00th=[ 188], 99.50th=[ 207], 99.90th=[ 239], 99.95th=[ 239], 00:24:21.430 | 99.99th=[ 239] 00:24:21.430 bw ( KiB/s): min= 
1792, max=19417, per=1.22%, avg=11565.65, stdev=5178.62, samples=20 00:24:21.430 iops : min= 14, max= 151, avg=90.20, stdev=40.41, samples=20 00:24:21.430 lat (msec) : 10=14.20%, 20=25.37%, 50=12.62%, 100=42.26%, 250=5.55% 00:24:21.430 cpu : usr=0.60%, sys=0.44%, ctx=2679, majf=0, minf=1 00:24:21.430 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.430 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.430 issued rwts: total=800,911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.430 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.430 job61: (groupid=0, jobs=1): err= 0: pid=71749: Mon Jul 22 17:02:22 2024 00:24:21.430 read: IOPS=93, BW=11.7MiB/s (12.2MB/s)(100MiB/8564msec) 00:24:21.430 slat (usec): min=6, max=2237, avg=69.57, stdev=153.05 00:24:21.430 clat (usec): min=3668, max=52583, avg=13185.05, stdev=6437.73 00:24:21.430 lat (usec): min=3823, max=52591, avg=13254.62, stdev=6445.62 00:24:21.430 clat percentiles (usec): 00:24:21.430 | 1.00th=[ 5014], 5.00th=[ 6194], 10.00th=[ 7111], 20.00th=[ 8029], 00:24:21.430 | 30.00th=[ 9241], 40.00th=[10683], 50.00th=[11994], 60.00th=[13173], 00:24:21.430 | 70.00th=[14484], 80.00th=[16450], 90.00th=[20841], 95.00th=[24249], 00:24:21.430 | 99.00th=[39584], 99.50th=[44827], 99.90th=[52691], 99.95th=[52691], 00:24:21.430 | 99.99th=[52691] 00:24:21.430 write: IOPS=101, BW=12.7MiB/s (13.3MB/s)(111MiB/8726msec); 0 zone resets 00:24:21.430 slat (usec): min=38, max=2345, avg=136.17, stdev=185.41 00:24:21.430 clat (msec): min=22, max=247, avg=78.13, stdev=30.31 00:24:21.430 lat (msec): min=23, max=248, avg=78.27, stdev=30.31 00:24:21.430 clat percentiles (msec): 00:24:21.430 | 1.00th=[ 44], 5.00th=[ 50], 10.00th=[ 52], 20.00th=[ 56], 00:24:21.430 | 30.00th=[ 62], 40.00th=[ 67], 50.00th=[ 70], 60.00th=[ 75], 00:24:21.430 | 70.00th=[ 82], 80.00th=[ 94], 90.00th=[ 116], 
95.00th=[ 144], 00:24:21.430 | 99.00th=[ 194], 99.50th=[ 201], 99.90th=[ 249], 99.95th=[ 249], 00:24:21.430 | 99.99th=[ 249] 00:24:21.430 bw ( KiB/s): min= 3832, max=18725, per=1.18%, avg=11211.40, stdev=4653.84, samples=20 00:24:21.430 iops : min= 29, max= 146, avg=87.40, stdev=36.48, samples=20 00:24:21.430 lat (msec) : 4=0.06%, 10=16.09%, 20=25.83%, 50=9.09%, 100=40.86% 00:24:21.430 lat (msec) : 250=8.08% 00:24:21.430 cpu : usr=0.56%, sys=0.42%, ctx=2779, majf=0, minf=7 00:24:21.430 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.430 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.430 issued rwts: total=800,884,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.430 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.430 job62: (groupid=0, jobs=1): err= 0: pid=71750: Mon Jul 22 17:02:22 2024 00:24:21.430 read: IOPS=77, BW=9884KiB/s (10.1MB/s)(80.0MiB/8288msec) 00:24:21.430 slat (usec): min=7, max=2498, avg=60.81, stdev=137.88 00:24:21.430 clat (msec): min=3, max=255, avg=17.91, stdev=30.71 00:24:21.430 lat (msec): min=3, max=255, avg=17.97, stdev=30.71 00:24:21.430 clat percentiles (msec): 00:24:21.430 | 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 6], 00:24:21.430 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 12], 00:24:21.430 | 70.00th=[ 13], 80.00th=[ 17], 90.00th=[ 23], 95.00th=[ 62], 00:24:21.430 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 255], 99.95th=[ 255], 00:24:21.430 | 99.99th=[ 255] 00:24:21.430 write: IOPS=90, BW=11.3MiB/s (11.8MB/s)(96.9MiB/8601msec); 0 zone resets 00:24:21.430 slat (usec): min=37, max=4501, avg=154.20, stdev=321.87 00:24:21.430 clat (msec): min=44, max=247, avg=88.21, stdev=29.91 00:24:21.430 lat (msec): min=44, max=247, avg=88.37, stdev=29.92 00:24:21.430 clat percentiles (msec): 00:24:21.430 | 1.00th=[ 49], 5.00th=[ 52], 10.00th=[ 56], 20.00th=[ 64], 
00:24:21.430 | 30.00th=[ 71], 40.00th=[ 78], 50.00th=[ 84], 60.00th=[ 90], 00:24:21.430 | 70.00th=[ 99], 80.00th=[ 107], 90.00th=[ 128], 95.00th=[ 146], 00:24:21.430 | 99.00th=[ 203], 99.50th=[ 211], 99.90th=[ 249], 99.95th=[ 249], 00:24:21.430 | 99.99th=[ 249] 00:24:21.430 bw ( KiB/s): min= 768, max=16929, per=1.03%, avg=9824.00, stdev=4066.90, samples=20 00:24:21.430 iops : min= 6, max= 132, avg=76.55, stdev=31.75, samples=20 00:24:21.430 lat (msec) : 4=0.14%, 10=24.45%, 20=14.91%, 50=5.16%, 100=38.09% 00:24:21.430 lat (msec) : 250=17.17%, 500=0.07% 00:24:21.430 cpu : usr=0.50%, sys=0.33%, ctx=2329, majf=0, minf=1 00:24:21.430 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.430 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.430 issued rwts: total=640,775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.430 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.430 job63: (groupid=0, jobs=1): err= 0: pid=71751: Mon Jul 22 17:02:22 2024 00:24:21.430 read: IOPS=94, BW=11.8MiB/s (12.4MB/s)(100MiB/8489msec) 00:24:21.430 slat (usec): min=5, max=1767, avg=66.84, stdev=133.52 00:24:21.430 clat (usec): min=5077, max=48474, avg=13139.79, stdev=5483.38 00:24:21.430 lat (usec): min=5118, max=48486, avg=13206.64, stdev=5481.67 00:24:21.430 clat percentiles (usec): 00:24:21.430 | 1.00th=[ 6456], 5.00th=[ 7177], 10.00th=[ 8094], 20.00th=[ 9241], 00:24:21.430 | 30.00th=[10028], 40.00th=[10945], 50.00th=[11863], 60.00th=[12780], 00:24:21.430 | 70.00th=[14222], 80.00th=[15533], 90.00th=[19792], 95.00th=[23462], 00:24:21.430 | 99.00th=[34866], 99.50th=[37487], 99.90th=[48497], 99.95th=[48497], 00:24:21.430 | 99.99th=[48497] 00:24:21.430 write: IOPS=102, BW=12.9MiB/s (13.5MB/s)(112MiB/8712msec); 0 zone resets 00:24:21.430 slat (usec): min=33, max=8394, avg=135.57, stdev=315.84 00:24:21.430 clat (msec): min=47, max=231, 
avg=76.93, stdev=27.69 00:24:21.430 lat (msec): min=48, max=232, avg=77.07, stdev=27.69 00:24:21.430 clat percentiles (msec): 00:24:21.430 | 1.00th=[ 50], 5.00th=[ 52], 10.00th=[ 54], 20.00th=[ 57], 00:24:21.430 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 74], 00:24:21.430 | 70.00th=[ 80], 80.00th=[ 91], 90.00th=[ 110], 95.00th=[ 144], 00:24:21.430 | 99.00th=[ 176], 99.50th=[ 186], 99.90th=[ 232], 99.95th=[ 232], 00:24:21.430 | 99.99th=[ 232] 00:24:21.430 bw ( KiB/s): min= 4352, max=17664, per=1.20%, avg=11384.55, stdev=4365.99, samples=20 00:24:21.430 iops : min= 34, max= 138, avg=88.75, stdev=34.14, samples=20 00:24:21.430 lat (msec) : 10=13.79%, 20=28.99%, 50=5.95%, 100=43.55%, 250=7.72% 00:24:21.430 cpu : usr=0.58%, sys=0.40%, ctx=2841, majf=0, minf=1 00:24:21.430 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.430 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.430 issued rwts: total=800,897,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.430 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.430 job64: (groupid=0, jobs=1): err= 0: pid=71752: Mon Jul 22 17:02:22 2024 00:24:21.430 read: IOPS=92, BW=11.5MiB/s (12.1MB/s)(100MiB/8690msec) 00:24:21.430 slat (usec): min=7, max=1301, avg=51.00, stdev=100.45 00:24:21.430 clat (usec): min=4515, max=72714, avg=12974.01, stdev=7268.91 00:24:21.430 lat (usec): min=4525, max=72724, avg=13025.01, stdev=7271.90 00:24:21.430 clat percentiles (usec): 00:24:21.430 | 1.00th=[ 4817], 5.00th=[ 5932], 10.00th=[ 7373], 20.00th=[ 8586], 00:24:21.430 | 30.00th=[ 9634], 40.00th=[10683], 50.00th=[11731], 60.00th=[12649], 00:24:21.430 | 70.00th=[13960], 80.00th=[15533], 90.00th=[19006], 95.00th=[22676], 00:24:21.430 | 99.00th=[40633], 99.50th=[63701], 99.90th=[72877], 99.95th=[72877], 00:24:21.430 | 99.99th=[72877] 00:24:21.431 write: IOPS=100, BW=12.6MiB/s 
(13.2MB/s)(110MiB/8732msec); 0 zone resets 00:24:21.431 slat (usec): min=38, max=3007, avg=136.01, stdev=213.44 00:24:21.431 clat (msec): min=43, max=283, avg=78.80, stdev=33.72 00:24:21.431 lat (msec): min=43, max=283, avg=78.93, stdev=33.72 00:24:21.431 clat percentiles (msec): 00:24:21.431 | 1.00th=[ 47], 5.00th=[ 50], 10.00th=[ 52], 20.00th=[ 56], 00:24:21.431 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 75], 00:24:21.431 | 70.00th=[ 82], 80.00th=[ 92], 90.00th=[ 113], 95.00th=[ 148], 00:24:21.431 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 284], 99.95th=[ 284], 00:24:21.431 | 99.99th=[ 284] 00:24:21.431 bw ( KiB/s): min= 2043, max=17955, per=1.17%, avg=11128.70, stdev=4893.87, samples=20 00:24:21.431 iops : min= 15, max= 140, avg=86.75, stdev=38.25, samples=20 00:24:21.431 lat (msec) : 10=15.86%, 20=27.61%, 50=6.68%, 100=41.62%, 250=7.81% 00:24:21.431 lat (msec) : 500=0.42% 00:24:21.431 cpu : usr=0.70%, sys=0.30%, ctx=2666, majf=0, minf=3 00:24:21.431 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.431 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.431 issued rwts: total=800,877,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.431 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.431 job65: (groupid=0, jobs=1): err= 0: pid=71753: Mon Jul 22 17:02:22 2024 00:24:21.431 read: IOPS=91, BW=11.4MiB/s (12.0MB/s)(100MiB/8749msec) 00:24:21.431 slat (usec): min=7, max=1364, avg=64.40, stdev=134.79 00:24:21.431 clat (msec): min=5, max=134, avg=13.92, stdev=12.44 00:24:21.431 lat (msec): min=5, max=134, avg=13.99, stdev=12.43 00:24:21.431 clat percentiles (msec): 00:24:21.431 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:24:21.431 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 14], 00:24:21.431 | 70.00th=[ 15], 80.00th=[ 17], 90.00th=[ 21], 95.00th=[ 24], 00:24:21.431 | 
99.00th=[ 42], 99.50th=[ 128], 99.90th=[ 134], 99.95th=[ 134], 00:24:21.431 | 99.99th=[ 134] 00:24:21.431 write: IOPS=104, BW=13.0MiB/s (13.6MB/s)(112MiB/8631msec); 0 zone resets 00:24:21.431 slat (usec): min=37, max=2702, avg=135.98, stdev=220.35 00:24:21.431 clat (msec): min=10, max=276, avg=76.06, stdev=28.04 00:24:21.431 lat (msec): min=10, max=276, avg=76.20, stdev=28.06 00:24:21.431 clat percentiles (msec): 00:24:21.431 | 1.00th=[ 25], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 57], 00:24:21.431 | 30.00th=[ 62], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 77], 00:24:21.431 | 70.00th=[ 84], 80.00th=[ 90], 90.00th=[ 105], 95.00th=[ 125], 00:24:21.431 | 99.00th=[ 180], 99.50th=[ 234], 99.90th=[ 275], 99.95th=[ 275], 00:24:21.431 | 99.99th=[ 275] 00:24:21.431 bw ( KiB/s): min= 3840, max=20777, per=1.20%, avg=11402.35, stdev=4494.50, samples=20 00:24:21.431 iops : min= 30, max= 162, avg=88.90, stdev=35.05, samples=20 00:24:21.431 lat (msec) : 10=16.31%, 20=26.56%, 50=5.89%, 100=44.52%, 250=6.54% 00:24:21.431 lat (msec) : 500=0.18% 00:24:21.431 cpu : usr=0.60%, sys=0.40%, ctx=2679, majf=0, minf=3 00:24:21.431 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.431 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.431 issued rwts: total=800,898,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.431 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.431 job66: (groupid=0, jobs=1): err= 0: pid=71754: Mon Jul 22 17:02:22 2024 00:24:21.431 read: IOPS=93, BW=11.6MiB/s (12.2MB/s)(100MiB/8602msec) 00:24:21.431 slat (usec): min=7, max=1557, avg=60.50, stdev=125.27 00:24:21.431 clat (usec): min=4131, max=57366, avg=12430.75, stdev=7658.89 00:24:21.431 lat (usec): min=4306, max=57465, avg=12491.25, stdev=7663.23 00:24:21.431 clat percentiles (usec): 00:24:21.431 | 1.00th=[ 4686], 5.00th=[ 5932], 10.00th=[ 6259], 20.00th=[ 7373], 
00:24:21.431 | 30.00th=[ 8455], 40.00th=[ 9634], 50.00th=[11076], 60.00th=[11731], 00:24:21.431 | 70.00th=[12911], 80.00th=[14746], 90.00th=[20579], 95.00th=[26084], 00:24:21.431 | 99.00th=[49021], 99.50th=[52691], 99.90th=[57410], 99.95th=[57410], 00:24:21.431 | 99.99th=[57410] 00:24:21.431 write: IOPS=93, BW=11.7MiB/s (12.3MB/s)(103MiB/8793msec); 0 zone resets 00:24:21.431 slat (usec): min=33, max=6187, avg=150.49, stdev=294.59 00:24:21.431 clat (msec): min=14, max=313, avg=84.42, stdev=35.63 00:24:21.431 lat (msec): min=14, max=313, avg=84.57, stdev=35.64 00:24:21.431 clat percentiles (msec): 00:24:21.431 | 1.00th=[ 43], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 58], 00:24:21.431 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 74], 60.00th=[ 82], 00:24:21.431 | 70.00th=[ 92], 80.00th=[ 104], 90.00th=[ 133], 95.00th=[ 153], 00:24:21.431 | 99.00th=[ 218], 99.50th=[ 249], 99.90th=[ 313], 99.95th=[ 313], 00:24:21.431 | 99.99th=[ 313] 00:24:21.431 bw ( KiB/s): min= 4598, max=17408, per=1.10%, avg=10467.90, stdev=4135.55, samples=20 00:24:21.431 iops : min= 35, max= 136, avg=81.60, stdev=32.52, samples=20 00:24:21.431 lat (msec) : 10=20.86%, 20=23.45%, 50=6.46%, 100=37.35%, 250=11.63% 00:24:21.431 lat (msec) : 500=0.25% 00:24:21.431 cpu : usr=0.61%, sys=0.38%, ctx=2634, majf=0, minf=5 00:24:21.431 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.431 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.431 issued rwts: total=800,825,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.431 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.431 job67: (groupid=0, jobs=1): err= 0: pid=71755: Mon Jul 22 17:02:22 2024 00:24:21.431 read: IOPS=89, BW=11.2MiB/s (11.7MB/s)(100MiB/8947msec) 00:24:21.431 slat (usec): min=6, max=4511, avg=59.92, stdev=192.18 00:24:21.431 clat (usec): min=4496, max=69739, avg=13317.37, stdev=8598.07 
00:24:21.431 lat (usec): min=4614, max=69755, avg=13377.29, stdev=8594.59 00:24:21.431 clat percentiles (usec): 00:24:21.431 | 1.00th=[ 5080], 5.00th=[ 5669], 10.00th=[ 6325], 20.00th=[ 7635], 00:24:21.431 | 30.00th=[ 8979], 40.00th=[10421], 50.00th=[11338], 60.00th=[12125], 00:24:21.431 | 70.00th=[13435], 80.00th=[16450], 90.00th=[21627], 95.00th=[29754], 00:24:21.431 | 99.00th=[46924], 99.50th=[66323], 99.90th=[69731], 99.95th=[69731], 00:24:21.431 | 99.99th=[69731] 00:24:21.431 write: IOPS=93, BW=11.7MiB/s (12.2MB/s)(102MiB/8725msec); 0 zone resets 00:24:21.431 slat (usec): min=36, max=3450, avg=148.69, stdev=240.06 00:24:21.431 clat (usec): min=1530, max=305969, avg=84608.49, stdev=38156.65 00:24:21.431 lat (usec): min=1827, max=306046, avg=84757.19, stdev=38159.60 00:24:21.431 clat percentiles (msec): 00:24:21.431 | 1.00th=[ 5], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 58], 00:24:21.431 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 78], 60.00th=[ 85], 00:24:21.431 | 70.00th=[ 94], 80.00th=[ 105], 90.00th=[ 123], 95.00th=[ 153], 00:24:21.431 | 99.00th=[ 249], 99.50th=[ 275], 99.90th=[ 305], 99.95th=[ 305], 00:24:21.431 | 99.99th=[ 305] 00:24:21.431 bw ( KiB/s): min= 1532, max=20224, per=1.08%, avg=10315.30, stdev=4885.42, samples=20 00:24:21.431 iops : min= 11, max= 158, avg=80.45, stdev=38.24, samples=20 00:24:21.431 lat (msec) : 2=0.12%, 4=0.37%, 10=17.60%, 20=26.27%, 50=7.25% 00:24:21.431 lat (msec) : 100=36.18%, 250=11.77%, 500=0.43% 00:24:21.431 cpu : usr=0.56%, sys=0.39%, ctx=2677, majf=0, minf=5 00:24:21.431 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.431 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.431 issued rwts: total=800,814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.431 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.431 job68: (groupid=0, jobs=1): err= 0: pid=71756: Mon Jul 22 
17:02:22 2024 00:24:21.431 read: IOPS=94, BW=11.8MiB/s (12.3MB/s)(100MiB/8508msec) 00:24:21.431 slat (usec): min=6, max=2991, avg=63.79, stdev=161.89 00:24:21.431 clat (usec): min=4539, max=37371, avg=12463.51, stdev=5528.49 00:24:21.431 lat (usec): min=4665, max=37382, avg=12527.30, stdev=5525.93 00:24:21.431 clat percentiles (usec): 00:24:21.431 | 1.00th=[ 5014], 5.00th=[ 5538], 10.00th=[ 6063], 20.00th=[ 7439], 00:24:21.431 | 30.00th=[ 9503], 40.00th=[10552], 50.00th=[11731], 60.00th=[12780], 00:24:21.431 | 70.00th=[14091], 80.00th=[16188], 90.00th=[19006], 95.00th=[23462], 00:24:21.431 | 99.00th=[31851], 99.50th=[36963], 99.90th=[37487], 99.95th=[37487], 00:24:21.431 | 99.99th=[37487] 00:24:21.431 write: IOPS=98, BW=12.4MiB/s (12.9MB/s)(109MiB/8785msec); 0 zone resets 00:24:21.431 slat (usec): min=32, max=2646, avg=139.18, stdev=239.46 00:24:21.431 clat (msec): min=37, max=245, avg=80.25, stdev=29.41 00:24:21.431 lat (msec): min=37, max=245, avg=80.39, stdev=29.41 00:24:21.431 clat percentiles (msec): 00:24:21.431 | 1.00th=[ 42], 5.00th=[ 50], 10.00th=[ 52], 20.00th=[ 56], 00:24:21.431 | 30.00th=[ 62], 40.00th=[ 67], 50.00th=[ 74], 60.00th=[ 81], 00:24:21.431 | 70.00th=[ 89], 80.00th=[ 101], 90.00th=[ 116], 95.00th=[ 140], 00:24:21.431 | 99.00th=[ 176], 99.50th=[ 197], 99.90th=[ 247], 99.95th=[ 247], 00:24:21.431 | 99.99th=[ 247] 00:24:21.431 bw ( KiB/s): min= 4096, max=18906, per=1.16%, avg=11018.05, stdev=4005.74, samples=20 00:24:21.431 iops : min= 32, max= 147, avg=85.95, stdev=31.21, samples=20 00:24:21.431 lat (msec) : 10=16.85%, 20=27.16%, 50=7.25%, 100=38.49%, 250=10.25% 00:24:21.431 cpu : usr=0.72%, sys=0.30%, ctx=2538, majf=0, minf=3 00:24:21.431 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.431 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.431 issued rwts: total=800,868,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:24:21.431 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.431 job69: (groupid=0, jobs=1): err= 0: pid=71757: Mon Jul 22 17:02:22 2024 00:24:21.431 read: IOPS=78, BW=9.85MiB/s (10.3MB/s)(80.0MiB/8124msec) 00:24:21.431 slat (usec): min=7, max=2169, avg=64.09, stdev=150.34 00:24:21.431 clat (msec): min=3, max=246, avg=25.06, stdev=39.95 00:24:21.431 lat (msec): min=3, max=246, avg=25.12, stdev=39.96 00:24:21.431 clat percentiles (msec): 00:24:21.432 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:24:21.432 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 14], 00:24:21.432 | 70.00th=[ 16], 80.00th=[ 23], 90.00th=[ 57], 95.00th=[ 124], 00:24:21.432 | 99.00th=[ 230], 99.50th=[ 232], 99.90th=[ 247], 99.95th=[ 247], 00:24:21.432 | 99.99th=[ 247] 00:24:21.432 write: IOPS=95, BW=12.0MiB/s (12.5MB/s)(95.9MiB/8013msec); 0 zone resets 00:24:21.432 slat (usec): min=37, max=3340, avg=115.06, stdev=173.34 00:24:21.432 clat (msec): min=46, max=360, avg=83.05, stdev=36.33 00:24:21.432 lat (msec): min=47, max=360, avg=83.17, stdev=36.35 00:24:21.432 clat percentiles (msec): 00:24:21.432 | 1.00th=[ 49], 5.00th=[ 50], 10.00th=[ 52], 20.00th=[ 58], 00:24:21.432 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 75], 60.00th=[ 82], 00:24:21.432 | 70.00th=[ 90], 80.00th=[ 101], 90.00th=[ 117], 95.00th=[ 142], 00:24:21.432 | 99.00th=[ 257], 99.50th=[ 313], 99.90th=[ 359], 99.95th=[ 359], 00:24:21.432 | 99.99th=[ 359] 00:24:21.432 bw ( KiB/s): min= 1788, max=17920, per=1.08%, avg=10232.63, stdev=5167.52, samples=19 00:24:21.432 iops : min= 13, max= 140, avg=79.74, stdev=40.52, samples=19 00:24:21.432 lat (msec) : 4=0.21%, 10=15.78%, 20=19.05%, 50=8.96%, 100=42.22% 00:24:21.432 lat (msec) : 250=13.22%, 500=0.57% 00:24:21.432 cpu : usr=0.48%, sys=0.32%, ctx=2256, majf=0, minf=5 00:24:21.432 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:24:21.432 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.432 issued rwts: total=640,767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.432 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.432 job70: (groupid=0, jobs=1): err= 0: pid=71758: Mon Jul 22 17:02:22 2024 00:24:21.432 read: IOPS=49, BW=6371KiB/s (6524kB/s)(44.4MiB/7132msec) 00:24:21.432 slat (usec): min=8, max=1018, avg=61.10, stdev=96.04 00:24:21.432 clat (msec): min=5, max=223, avg=29.96, stdev=39.45 00:24:21.432 lat (msec): min=5, max=223, avg=30.02, stdev=39.46 00:24:21.432 clat percentiles (msec): 00:24:21.432 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 12], 00:24:21.432 | 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 18], 60.00th=[ 21], 00:24:21.432 | 70.00th=[ 25], 80.00th=[ 33], 90.00th=[ 58], 95.00th=[ 94], 00:24:21.432 | 99.00th=[ 215], 99.50th=[ 222], 99.90th=[ 224], 99.95th=[ 224], 00:24:21.432 | 99.99th=[ 224] 00:24:21.432 write: IOPS=55, BW=7087KiB/s (7257kB/s)(60.0MiB/8670msec); 0 zone resets 00:24:21.432 slat (usec): min=38, max=2206, avg=137.28, stdev=181.34 00:24:21.432 clat (msec): min=69, max=515, avg=143.70, stdev=66.58 00:24:21.432 lat (msec): min=70, max=515, avg=143.84, stdev=66.59 00:24:21.432 clat percentiles (msec): 00:24:21.432 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 80], 20.00th=[ 91], 00:24:21.432 | 30.00th=[ 99], 40.00th=[ 107], 50.00th=[ 125], 60.00th=[ 148], 00:24:21.432 | 70.00th=[ 165], 80.00th=[ 190], 90.00th=[ 226], 95.00th=[ 257], 00:24:21.432 | 99.00th=[ 388], 99.50th=[ 447], 99.90th=[ 514], 99.95th=[ 514], 00:24:21.432 | 99.99th=[ 514] 00:24:21.432 bw ( KiB/s): min= 1532, max=12288, per=0.69%, avg=6568.33, stdev=3130.91, samples=18 00:24:21.432 iops : min= 11, max= 96, avg=51.06, stdev=24.70, samples=18 00:24:21.432 lat (msec) : 10=5.15%, 20=19.28%, 50=13.17%, 100=21.92%, 250=37.01% 00:24:21.432 lat (msec) : 500=3.23%, 750=0.24% 00:24:21.432 cpu : usr=0.30%, sys=0.19%, ctx=1463, majf=0, minf=5 00:24:21.432 IO 
depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=95.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.432 complete : 0=0.0%, 4=99.2%, 8=0.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.432 issued rwts: total=355,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.432 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.432 job71: (groupid=0, jobs=1): err= 0: pid=71759: Mon Jul 22 17:02:22 2024 00:24:21.432 read: IOPS=56, BW=7256KiB/s (7431kB/s)(60.0MiB/8467msec) 00:24:21.432 slat (usec): min=7, max=1811, avg=53.01, stdev=113.08 00:24:21.432 clat (msec): min=10, max=204, avg=25.87, stdev=25.67 00:24:21.432 lat (msec): min=10, max=204, avg=25.92, stdev=25.68 00:24:21.432 clat percentiles (msec): 00:24:21.432 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 14], 00:24:21.432 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 21], 00:24:21.432 | 70.00th=[ 25], 80.00th=[ 31], 90.00th=[ 40], 95.00th=[ 77], 00:24:21.432 | 99.00th=[ 182], 99.50th=[ 192], 99.90th=[ 205], 99.95th=[ 205], 00:24:21.432 | 99.99th=[ 205] 00:24:21.432 write: IOPS=74, BW=9514KiB/s (9743kB/s)(79.1MiB/8516msec); 0 zone resets 00:24:21.432 slat (usec): min=40, max=2588, avg=141.66, stdev=192.81 00:24:21.432 clat (msec): min=12, max=341, avg=106.63, stdev=38.37 00:24:21.432 lat (msec): min=12, max=342, avg=106.77, stdev=38.37 00:24:21.432 clat percentiles (msec): 00:24:21.432 | 1.00th=[ 29], 5.00th=[ 71], 10.00th=[ 74], 20.00th=[ 79], 00:24:21.432 | 30.00th=[ 86], 40.00th=[ 93], 50.00th=[ 99], 60.00th=[ 103], 00:24:21.432 | 70.00th=[ 113], 80.00th=[ 131], 90.00th=[ 153], 95.00th=[ 178], 00:24:21.432 | 99.00th=[ 243], 99.50th=[ 271], 99.90th=[ 342], 99.95th=[ 342], 00:24:21.432 | 99.99th=[ 342] 00:24:21.432 bw ( KiB/s): min= 1788, max=13824, per=0.84%, avg=8009.25, stdev=3804.39, samples=20 00:24:21.432 iops : min= 13, max= 108, avg=62.40, stdev=29.80, samples=20 00:24:21.432 lat (msec) : 20=26.06%, 50=14.56%, 
100=33.60%, 250=25.34%, 500=0.45% 00:24:21.432 cpu : usr=0.40%, sys=0.26%, ctx=1822, majf=0, minf=3 00:24:21.432 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.432 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.432 issued rwts: total=480,633,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.432 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.432 job72: (groupid=0, jobs=1): err= 0: pid=71760: Mon Jul 22 17:02:22 2024 00:24:21.432 read: IOPS=61, BW=7871KiB/s (8060kB/s)(60.0MiB/7806msec) 00:24:21.432 slat (usec): min=7, max=937, avg=75.91, stdev=140.50 00:24:21.432 clat (usec): min=7061, max=68334, avg=17452.37, stdev=10296.22 00:24:21.432 lat (usec): min=7120, max=68352, avg=17528.28, stdev=10292.55 00:24:21.432 clat percentiles (usec): 00:24:21.432 | 1.00th=[ 7504], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[11207], 00:24:21.432 | 30.00th=[11994], 40.00th=[12911], 50.00th=[14222], 60.00th=[15270], 00:24:21.432 | 70.00th=[16909], 80.00th=[19792], 90.00th=[32375], 95.00th=[42730], 00:24:21.432 | 99.00th=[52691], 99.50th=[67634], 99.90th=[68682], 99.95th=[68682], 00:24:21.432 | 99.99th=[68682] 00:24:21.432 write: IOPS=64, BW=8195KiB/s (8392kB/s)(72.1MiB/9012msec); 0 zone resets 00:24:21.432 slat (usec): min=38, max=2344, avg=140.44, stdev=212.96 00:24:21.432 clat (msec): min=37, max=457, avg=123.90, stdev=59.96 00:24:21.432 lat (msec): min=38, max=457, avg=124.04, stdev=59.96 00:24:21.432 clat percentiles (msec): 00:24:21.432 | 1.00th=[ 45], 5.00th=[ 73], 10.00th=[ 77], 20.00th=[ 81], 00:24:21.432 | 30.00th=[ 88], 40.00th=[ 95], 50.00th=[ 102], 60.00th=[ 112], 00:24:21.432 | 70.00th=[ 129], 80.00th=[ 169], 90.00th=[ 207], 95.00th=[ 249], 00:24:21.432 | 99.00th=[ 330], 99.50th=[ 376], 99.90th=[ 460], 99.95th=[ 460], 00:24:21.432 | 99.99th=[ 460] 00:24:21.432 bw ( KiB/s): min= 1536, max=11752, per=0.77%, 
avg=7279.00, stdev=3560.27, samples=20 00:24:21.432 iops : min= 12, max= 91, avg=56.70, stdev=27.67, samples=20 00:24:21.432 lat (msec) : 10=4.54%, 20=31.98%, 50=8.70%, 100=26.40%, 250=25.83% 00:24:21.432 lat (msec) : 500=2.55% 00:24:21.432 cpu : usr=0.41%, sys=0.22%, ctx=1786, majf=0, minf=1 00:24:21.432 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.432 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.432 issued rwts: total=480,577,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.432 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.432 job73: (groupid=0, jobs=1): err= 0: pid=71761: Mon Jul 22 17:02:22 2024 00:24:21.432 read: IOPS=64, BW=8281KiB/s (8480kB/s)(60.0MiB/7419msec) 00:24:21.432 slat (usec): min=8, max=769, avg=47.30, stdev=76.01 00:24:21.432 clat (msec): min=6, max=456, avg=30.91, stdev=64.14 00:24:21.432 lat (msec): min=6, max=456, avg=30.96, stdev=64.14 00:24:21.432 clat percentiles (msec): 00:24:21.432 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:24:21.432 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 16], 00:24:21.433 | 70.00th=[ 18], 80.00th=[ 22], 90.00th=[ 37], 95.00th=[ 171], 00:24:21.433 | 99.00th=[ 418], 99.50th=[ 422], 99.90th=[ 456], 99.95th=[ 456], 00:24:21.433 | 99.99th=[ 456] 00:24:21.433 write: IOPS=59, BW=7596KiB/s (7779kB/s)(61.0MiB/8223msec); 0 zone resets 00:24:21.433 slat (usec): min=33, max=3424, avg=145.02, stdev=219.14 00:24:21.433 clat (msec): min=16, max=355, avg=133.91, stdev=59.39 00:24:21.433 lat (msec): min=16, max=355, avg=134.05, stdev=59.41 00:24:21.433 clat percentiles (msec): 00:24:21.433 | 1.00th=[ 32], 5.00th=[ 73], 10.00th=[ 77], 20.00th=[ 85], 00:24:21.433 | 30.00th=[ 96], 40.00th=[ 105], 50.00th=[ 117], 60.00th=[ 140], 00:24:21.433 | 70.00th=[ 153], 80.00th=[ 180], 90.00th=[ 218], 95.00th=[ 255], 00:24:21.433 | 99.00th=[ 
351], 99.50th=[ 355], 99.90th=[ 355], 99.95th=[ 355], 00:24:21.433 | 99.99th=[ 355] 00:24:21.433 bw ( KiB/s): min= 1024, max=12032, per=0.72%, avg=6823.50, stdev=3270.48, samples=18 00:24:21.433 iops : min= 8, max= 94, avg=53.11, stdev=25.49, samples=18 00:24:21.433 lat (msec) : 10=6.51%, 20=32.02%, 50=8.57%, 100=17.67%, 250=30.99% 00:24:21.433 lat (msec) : 500=4.24% 00:24:21.433 cpu : usr=0.36%, sys=0.21%, ctx=1623, majf=0, minf=7 00:24:21.433 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=94.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.433 complete : 0=0.0%, 4=99.2%, 8=0.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.433 issued rwts: total=480,488,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.433 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.433 job74: (groupid=0, jobs=1): err= 0: pid=71762: Mon Jul 22 17:02:22 2024 00:24:21.433 read: IOPS=63, BW=8079KiB/s (8273kB/s)(60.0MiB/7605msec) 00:24:21.433 slat (usec): min=7, max=1051, avg=60.62, stdev=123.50 00:24:21.433 clat (usec): min=7928, max=58514, avg=17384.25, stdev=7210.50 00:24:21.433 lat (usec): min=8102, max=58526, avg=17444.87, stdev=7195.43 00:24:21.433 clat percentiles (usec): 00:24:21.433 | 1.00th=[ 8455], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[11600], 00:24:21.433 | 30.00th=[12256], 40.00th=[13960], 50.00th=[15533], 60.00th=[17957], 00:24:21.433 | 70.00th=[19530], 80.00th=[22414], 90.00th=[26084], 95.00th=[29754], 00:24:21.433 | 99.00th=[47449], 99.50th=[47449], 99.90th=[58459], 99.95th=[58459], 00:24:21.433 | 99.99th=[58459] 00:24:21.433 write: IOPS=65, BW=8406KiB/s (8608kB/s)(74.0MiB/9014msec); 0 zone resets 00:24:21.433 slat (usec): min=39, max=1098, avg=129.27, stdev=136.97 00:24:21.433 clat (msec): min=63, max=337, avg=120.89, stdev=53.60 00:24:21.433 lat (msec): min=63, max=337, avg=121.02, stdev=53.62 00:24:21.433 clat percentiles (msec): 00:24:21.433 | 1.00th=[ 69], 5.00th=[ 73], 10.00th=[ 75], 
20.00th=[ 81], 00:24:21.433 | 30.00th=[ 85], 40.00th=[ 93], 50.00th=[ 100], 60.00th=[ 113], 00:24:21.433 | 70.00th=[ 133], 80.00th=[ 155], 90.00th=[ 201], 95.00th=[ 245], 00:24:21.433 | 99.00th=[ 284], 99.50th=[ 309], 99.90th=[ 338], 99.95th=[ 338], 00:24:21.433 | 99.99th=[ 338] 00:24:21.433 bw ( KiB/s): min= 768, max=11752, per=0.79%, avg=7482.95, stdev=3333.73, samples=20 00:24:21.433 iops : min= 6, max= 91, avg=58.25, stdev=26.03, samples=20 00:24:21.433 lat (msec) : 10=3.17%, 20=29.10%, 50=12.41%, 100=28.54%, 250=24.63% 00:24:21.433 lat (msec) : 500=2.15% 00:24:21.433 cpu : usr=0.38%, sys=0.25%, ctx=1810, majf=0, minf=1 00:24:21.433 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.433 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.433 issued rwts: total=480,592,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.433 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.433 job75: (groupid=0, jobs=1): err= 0: pid=71763: Mon Jul 22 17:02:22 2024 00:24:21.433 read: IOPS=58, BW=7467KiB/s (7646kB/s)(60.0MiB/8228msec) 00:24:21.433 slat (usec): min=8, max=1348, avg=62.02, stdev=122.33 00:24:21.433 clat (usec): min=10115, max=92635, avg=21456.03, stdev=13473.45 00:24:21.433 lat (usec): min=10133, max=92650, avg=21518.05, stdev=13495.90 00:24:21.433 clat percentiles (usec): 00:24:21.433 | 1.00th=[10421], 5.00th=[11076], 10.00th=[11994], 20.00th=[13435], 00:24:21.433 | 30.00th=[14222], 40.00th=[15664], 50.00th=[17695], 60.00th=[19792], 00:24:21.433 | 70.00th=[21365], 80.00th=[23462], 90.00th=[36439], 95.00th=[47449], 00:24:21.433 | 99.00th=[84411], 99.50th=[85459], 99.90th=[92799], 99.95th=[92799], 00:24:21.433 | 99.99th=[92799] 00:24:21.433 write: IOPS=70, BW=9011KiB/s (9227kB/s)(77.2MiB/8779msec); 0 zone resets 00:24:21.433 slat (usec): min=31, max=1737, avg=120.97, stdev=147.14 00:24:21.433 clat (msec): min=20, 
max=393, avg=112.54, stdev=51.22 00:24:21.433 lat (msec): min=20, max=393, avg=112.66, stdev=51.24 00:24:21.433 clat percentiles (msec): 00:24:21.433 | 1.00th=[ 37], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 78], 00:24:21.433 | 30.00th=[ 86], 40.00th=[ 94], 50.00th=[ 101], 60.00th=[ 106], 00:24:21.433 | 70.00th=[ 115], 80.00th=[ 130], 90.00th=[ 171], 95.00th=[ 218], 00:24:21.433 | 99.00th=[ 326], 99.50th=[ 359], 99.90th=[ 393], 99.95th=[ 393], 00:24:21.433 | 99.99th=[ 393] 00:24:21.433 bw ( KiB/s): min= 1792, max=13312, per=0.86%, avg=8216.00, stdev=3637.60, samples=19 00:24:21.433 iops : min= 14, max= 104, avg=64.00, stdev=28.51, samples=19 00:24:21.433 lat (msec) : 20=26.78%, 50=15.57%, 100=30.05%, 250=25.77%, 500=1.82% 00:24:21.433 cpu : usr=0.40%, sys=0.26%, ctx=1776, majf=0, minf=5 00:24:21.433 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.433 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.433 issued rwts: total=480,618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.433 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.433 job76: (groupid=0, jobs=1): err= 0: pid=71764: Mon Jul 22 17:02:22 2024 00:24:21.433 read: IOPS=61, BW=7821KiB/s (8008kB/s)(60.0MiB/7856msec) 00:24:21.433 slat (usec): min=7, max=1538, avg=54.87, stdev=104.66 00:24:21.433 clat (usec): min=10891, max=75375, avg=19039.28, stdev=8381.60 00:24:21.433 lat (usec): min=10952, max=75383, avg=19094.15, stdev=8379.21 00:24:21.433 clat percentiles (usec): 00:24:21.433 | 1.00th=[11338], 5.00th=[11731], 10.00th=[12125], 20.00th=[12911], 00:24:21.433 | 30.00th=[14222], 40.00th=[15270], 50.00th=[16909], 60.00th=[18482], 00:24:21.433 | 70.00th=[20317], 80.00th=[22414], 90.00th=[29230], 95.00th=[34866], 00:24:21.433 | 99.00th=[55837], 99.50th=[56886], 99.90th=[74974], 99.95th=[74974], 00:24:21.433 | 99.99th=[74974] 00:24:21.433 write: IOPS=68, 
BW=8809KiB/s (9021kB/s)(76.6MiB/8907msec); 0 zone resets 00:24:21.433 slat (usec): min=39, max=1743, avg=143.92, stdev=184.90 00:24:21.433 clat (msec): min=55, max=407, avg=115.22, stdev=55.31 00:24:21.433 lat (msec): min=55, max=407, avg=115.37, stdev=55.32 00:24:21.433 clat percentiles (msec): 00:24:21.433 | 1.00th=[ 63], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 78], 00:24:21.433 | 30.00th=[ 85], 40.00th=[ 92], 50.00th=[ 97], 60.00th=[ 103], 00:24:21.433 | 70.00th=[ 114], 80.00th=[ 136], 90.00th=[ 201], 95.00th=[ 241], 00:24:21.433 | 99.00th=[ 317], 99.50th=[ 351], 99.90th=[ 409], 99.95th=[ 409], 00:24:21.433 | 99.99th=[ 409] 00:24:21.433 bw ( KiB/s): min= 1792, max=12800, per=0.81%, avg=7742.10, stdev=3831.54, samples=20 00:24:21.433 iops : min= 14, max= 100, avg=60.35, stdev=29.84, samples=20 00:24:21.433 lat (msec) : 20=29.73%, 50=13.54%, 100=33.12%, 250=21.50%, 500=2.10% 00:24:21.433 cpu : usr=0.39%, sys=0.25%, ctx=1878, majf=0, minf=3 00:24:21.433 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.433 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.433 issued rwts: total=480,613,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.433 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.433 job77: (groupid=0, jobs=1): err= 0: pid=71765: Mon Jul 22 17:02:22 2024 00:24:21.433 read: IOPS=60, BW=7741KiB/s (7927kB/s)(60.0MiB/7937msec) 00:24:21.433 slat (usec): min=8, max=1906, avg=89.65, stdev=202.25 00:24:21.433 clat (usec): min=9370, max=62539, avg=19949.55, stdev=8160.01 00:24:21.433 lat (usec): min=9458, max=62548, avg=20039.20, stdev=8161.47 00:24:21.433 clat percentiles (usec): 00:24:21.433 | 1.00th=[ 9503], 5.00th=[11338], 10.00th=[12387], 20.00th=[13960], 00:24:21.433 | 30.00th=[14877], 40.00th=[15926], 50.00th=[17957], 60.00th=[19268], 00:24:21.433 | 70.00th=[22152], 80.00th=[24249], 90.00th=[31589], 
95.00th=[35914], 00:24:21.433 | 99.00th=[49546], 99.50th=[49546], 99.90th=[62653], 99.95th=[62653], 00:24:21.433 | 99.99th=[62653] 00:24:21.433 write: IOPS=68, BW=8817KiB/s (9029kB/s)(76.1MiB/8841msec); 0 zone resets 00:24:21.433 slat (usec): min=40, max=2744, avg=123.12, stdev=182.22 00:24:21.433 clat (msec): min=69, max=421, avg=115.00, stdev=56.56 00:24:21.433 lat (msec): min=69, max=421, avg=115.12, stdev=56.56 00:24:21.433 clat percentiles (msec): 00:24:21.433 | 1.00th=[ 71], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 78], 00:24:21.433 | 30.00th=[ 83], 40.00th=[ 89], 50.00th=[ 96], 60.00th=[ 103], 00:24:21.433 | 70.00th=[ 112], 80.00th=[ 138], 90.00th=[ 199], 95.00th=[ 232], 00:24:21.433 | 99.00th=[ 330], 99.50th=[ 368], 99.90th=[ 422], 99.95th=[ 422], 00:24:21.433 | 99.99th=[ 422] 00:24:21.433 bw ( KiB/s): min= 1792, max=13568, per=0.81%, avg=7688.75, stdev=3861.80, samples=20 00:24:21.433 iops : min= 14, max= 106, avg=59.80, stdev=30.17, samples=20 00:24:21.433 lat (msec) : 10=1.10%, 20=26.35%, 50=16.44%, 100=32.23%, 250=21.76% 00:24:21.433 lat (msec) : 500=2.11% 00:24:21.433 cpu : usr=0.40%, sys=0.27%, ctx=1755, majf=0, minf=3 00:24:21.433 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.433 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.433 issued rwts: total=480,609,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.433 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.433 job78: (groupid=0, jobs=1): err= 0: pid=71766: Mon Jul 22 17:02:22 2024 00:24:21.434 read: IOPS=57, BW=7320KiB/s (7495kB/s)(60.0MiB/8394msec) 00:24:21.434 slat (usec): min=8, max=8855, avg=92.85, stdev=432.76 00:24:21.434 clat (msec): min=6, max=134, avg=27.33, stdev=18.06 00:24:21.434 lat (msec): min=6, max=134, avg=27.42, stdev=18.06 00:24:21.434 clat percentiles (msec): 00:24:21.434 | 1.00th=[ 11], 5.00th=[ 15], 10.00th=[ 16], 
20.00th=[ 18], 00:24:21.434 | 30.00th=[ 20], 40.00th=[ 21], 50.00th=[ 23], 60.00th=[ 24], 00:24:21.434 | 70.00th=[ 27], 80.00th=[ 30], 90.00th=[ 43], 95.00th=[ 60], 00:24:21.434 | 99.00th=[ 125], 99.50th=[ 126], 99.90th=[ 134], 99.95th=[ 134], 00:24:21.434 | 99.99th=[ 134] 00:24:21.434 write: IOPS=75, BW=9721KiB/s (9954kB/s)(80.0MiB/8427msec); 0 zone resets 00:24:21.434 slat (usec): min=39, max=2680, avg=134.95, stdev=192.73 00:24:21.434 clat (msec): min=3, max=281, avg=104.47, stdev=36.95 00:24:21.434 lat (msec): min=4, max=281, avg=104.60, stdev=36.96 00:24:21.434 clat percentiles (msec): 00:24:21.434 | 1.00th=[ 9], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 79], 00:24:21.434 | 30.00th=[ 85], 40.00th=[ 92], 50.00th=[ 97], 60.00th=[ 103], 00:24:21.434 | 70.00th=[ 112], 80.00th=[ 128], 90.00th=[ 155], 95.00th=[ 171], 00:24:21.434 | 99.00th=[ 218], 99.50th=[ 251], 99.90th=[ 284], 99.95th=[ 284], 00:24:21.434 | 99.99th=[ 284] 00:24:21.434 bw ( KiB/s): min= 1792, max=15872, per=0.90%, avg=8525.32, stdev=3704.79, samples=19 00:24:21.434 iops : min= 14, max= 124, avg=66.42, stdev=28.98, samples=19 00:24:21.434 lat (msec) : 4=0.18%, 10=0.89%, 20=14.02%, 50=26.96%, 100=31.79% 00:24:21.434 lat (msec) : 250=25.80%, 500=0.36% 00:24:21.434 cpu : usr=0.42%, sys=0.24%, ctx=1926, majf=0, minf=5 00:24:21.434 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.434 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.434 issued rwts: total=480,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.434 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.434 job79: (groupid=0, jobs=1): err= 0: pid=71767: Mon Jul 22 17:02:22 2024 00:24:21.434 read: IOPS=57, BW=7382KiB/s (7559kB/s)(60.0MiB/8323msec) 00:24:21.434 slat (usec): min=7, max=1156, avg=56.99, stdev=105.68 00:24:21.434 clat (msec): min=11, max=115, avg=27.06, stdev=15.80 
00:24:21.434 lat (msec): min=11, max=115, avg=27.12, stdev=15.80 00:24:21.434 clat percentiles (msec): 00:24:21.434 | 1.00th=[ 13], 5.00th=[ 15], 10.00th=[ 15], 20.00th=[ 17], 00:24:21.434 | 30.00th=[ 19], 40.00th=[ 20], 50.00th=[ 23], 60.00th=[ 26], 00:24:21.434 | 70.00th=[ 29], 80.00th=[ 33], 90.00th=[ 45], 95.00th=[ 56], 00:24:21.434 | 99.00th=[ 107], 99.50th=[ 109], 99.90th=[ 115], 99.95th=[ 115], 00:24:21.434 | 99.99th=[ 115] 00:24:21.434 write: IOPS=76, BW=9741KiB/s (9975kB/s)(80.0MiB/8410msec); 0 zone resets 00:24:21.434 slat (usec): min=31, max=2588, avg=127.12, stdev=195.57 00:24:21.434 clat (msec): min=46, max=336, avg=104.11, stdev=37.55 00:24:21.434 lat (msec): min=46, max=336, avg=104.24, stdev=37.55 00:24:21.434 clat percentiles (msec): 00:24:21.434 | 1.00th=[ 55], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 75], 00:24:21.434 | 30.00th=[ 81], 40.00th=[ 88], 50.00th=[ 94], 60.00th=[ 100], 00:24:21.434 | 70.00th=[ 108], 80.00th=[ 130], 90.00th=[ 153], 95.00th=[ 176], 00:24:21.434 | 99.00th=[ 251], 99.50th=[ 264], 99.90th=[ 338], 99.95th=[ 338], 00:24:21.434 | 99.99th=[ 338] 00:24:21.434 bw ( KiB/s): min= 1792, max=13824, per=0.95%, avg=8999.06, stdev=3281.21, samples=18 00:24:21.434 iops : min= 14, max= 108, avg=70.17, stdev=25.56, samples=18 00:24:21.434 lat (msec) : 20=17.68%, 50=22.50%, 100=37.50%, 250=21.79%, 500=0.54% 00:24:21.434 cpu : usr=0.44%, sys=0.22%, ctx=1862, majf=0, minf=7 00:24:21.434 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.434 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.434 issued rwts: total=480,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.434 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.434 job80: (groupid=0, jobs=1): err= 0: pid=71768: Mon Jul 22 17:02:22 2024 00:24:21.434 read: IOPS=59, BW=7590KiB/s (7772kB/s)(60.0MiB/8095msec) 00:24:21.434 slat 
(usec): min=7, max=1214, avg=67.87, stdev=139.17 00:24:21.434 clat (usec): min=12368, max=76101, avg=25971.07, stdev=11125.79 00:24:21.434 lat (usec): min=12382, max=76109, avg=26038.94, stdev=11136.22 00:24:21.434 clat percentiles (usec): 00:24:21.434 | 1.00th=[12780], 5.00th=[14615], 10.00th=[15664], 20.00th=[18220], 00:24:21.434 | 30.00th=[19530], 40.00th=[20579], 50.00th=[22676], 60.00th=[24249], 00:24:21.434 | 70.00th=[26870], 80.00th=[32375], 90.00th=[42730], 95.00th=[50594], 00:24:21.434 | 99.00th=[66847], 99.50th=[69731], 99.90th=[76022], 99.95th=[76022], 00:24:21.434 | 99.99th=[76022] 00:24:21.434 write: IOPS=72, BW=9224KiB/s (9445kB/s)(76.5MiB/8493msec); 0 zone resets 00:24:21.434 slat (usec): min=38, max=2041, avg=124.45, stdev=164.89 00:24:21.434 clat (msec): min=61, max=456, avg=109.75, stdev=56.88 00:24:21.434 lat (msec): min=62, max=456, avg=109.87, stdev=56.87 00:24:21.434 clat percentiles (msec): 00:24:21.434 | 1.00th=[ 69], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 78], 00:24:21.434 | 30.00th=[ 82], 40.00th=[ 87], 50.00th=[ 91], 60.00th=[ 97], 00:24:21.434 | 70.00th=[ 106], 80.00th=[ 124], 90.00th=[ 163], 95.00th=[ 226], 00:24:21.434 | 99.00th=[ 372], 99.50th=[ 422], 99.90th=[ 456], 99.95th=[ 456], 00:24:21.434 | 99.99th=[ 456] 00:24:21.434 bw ( KiB/s): min= 2048, max=12288, per=0.90%, avg=8590.28, stdev=3532.77, samples=18 00:24:21.434 iops : min= 16, max= 96, avg=67.00, stdev=27.60, samples=18 00:24:21.434 lat (msec) : 20=15.75%, 50=25.82%, 100=38.55%, 250=17.67%, 500=2.20% 00:24:21.434 cpu : usr=0.39%, sys=0.28%, ctx=1766, majf=0, minf=3 00:24:21.434 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.434 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.434 issued rwts: total=480,612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.434 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.434 
job81: (groupid=0, jobs=1): err= 0: pid=71769: Mon Jul 22 17:02:22 2024 00:24:21.434 read: IOPS=46, BW=5960KiB/s (6103kB/s)(45.2MiB/7774msec) 00:24:21.434 slat (usec): min=8, max=1494, avg=84.96, stdev=181.46 00:24:21.434 clat (msec): min=5, max=387, avg=48.67, stdev=66.96 00:24:21.434 lat (msec): min=5, max=387, avg=48.76, stdev=66.95 00:24:21.434 clat percentiles (msec): 00:24:21.434 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 10], 20.00th=[ 14], 00:24:21.434 | 30.00th=[ 19], 40.00th=[ 21], 50.00th=[ 25], 60.00th=[ 27], 00:24:21.434 | 70.00th=[ 32], 80.00th=[ 52], 90.00th=[ 150], 95.00th=[ 176], 00:24:21.434 | 99.00th=[ 347], 99.50th=[ 388], 99.90th=[ 388], 99.95th=[ 388], 00:24:21.434 | 99.99th=[ 388] 00:24:21.434 write: IOPS=61, BW=7878KiB/s (8067kB/s)(60.0MiB/7799msec); 0 zone resets 00:24:21.434 slat (usec): min=31, max=3661, avg=173.95, stdev=299.14 00:24:21.434 clat (msec): min=51, max=415, avg=128.95, stdev=59.28 00:24:21.434 lat (msec): min=51, max=415, avg=129.13, stdev=59.31 00:24:21.434 clat percentiles (msec): 00:24:21.434 | 1.00th=[ 57], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 79], 00:24:21.434 | 30.00th=[ 88], 40.00th=[ 96], 50.00th=[ 107], 60.00th=[ 128], 00:24:21.434 | 70.00th=[ 148], 80.00th=[ 180], 90.00th=[ 211], 95.00th=[ 245], 00:24:21.434 | 99.00th=[ 305], 99.50th=[ 368], 99.90th=[ 418], 99.95th=[ 418], 00:24:21.434 | 99.99th=[ 418] 00:24:21.434 bw ( KiB/s): min= 768, max=11799, per=0.72%, avg=6825.89, stdev=3380.92, samples=18 00:24:21.434 iops : min= 6, max= 92, avg=53.17, stdev=26.45, samples=18 00:24:21.434 lat (msec) : 10=5.34%, 20=10.33%, 50=18.53%, 100=27.32%, 250=35.15% 00:24:21.434 lat (msec) : 500=3.33% 00:24:21.434 cpu : usr=0.26%, sys=0.28%, ctx=1430, majf=0, minf=7 00:24:21.434 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=95.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.434 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.434 
issued rwts: total=362,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.434 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.434 job82: (groupid=0, jobs=1): err= 0: pid=71770: Mon Jul 22 17:02:22 2024 00:24:21.434 read: IOPS=62, BW=7960KiB/s (8151kB/s)(64.9MiB/8346msec) 00:24:21.434 slat (usec): min=7, max=992, avg=67.48, stdev=123.25 00:24:21.434 clat (usec): min=6556, max=72784, avg=19633.78, stdev=9924.23 00:24:21.434 lat (usec): min=6753, max=72798, avg=19701.26, stdev=9921.91 00:24:21.434 clat percentiles (usec): 00:24:21.434 | 1.00th=[ 6980], 5.00th=[10552], 10.00th=[11600], 20.00th=[12256], 00:24:21.434 | 30.00th=[13435], 40.00th=[15270], 50.00th=[17695], 60.00th=[19792], 00:24:21.434 | 70.00th=[21890], 80.00th=[25297], 90.00th=[29492], 95.00th=[35914], 00:24:21.434 | 99.00th=[66847], 99.50th=[69731], 99.90th=[72877], 99.95th=[72877], 00:24:21.434 | 99.99th=[72877] 00:24:21.434 write: IOPS=73, BW=9381KiB/s (9606kB/s)(80.0MiB/8733msec); 0 zone resets 00:24:21.434 slat (usec): min=31, max=3090, avg=145.92, stdev=238.99 00:24:21.434 clat (msec): min=10, max=364, avg=108.33, stdev=48.50 00:24:21.434 lat (msec): min=10, max=364, avg=108.47, stdev=48.49 00:24:21.434 clat percentiles (msec): 00:24:21.434 | 1.00th=[ 19], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 78], 00:24:21.434 | 30.00th=[ 82], 40.00th=[ 87], 50.00th=[ 92], 60.00th=[ 99], 00:24:21.434 | 70.00th=[ 109], 80.00th=[ 129], 90.00th=[ 186], 95.00th=[ 207], 00:24:21.434 | 99.00th=[ 288], 99.50th=[ 321], 99.90th=[ 363], 99.95th=[ 363], 00:24:21.434 | 99.99th=[ 363] 00:24:21.434 bw ( KiB/s): min= 1536, max=13056, per=0.86%, avg=8193.70, stdev=4049.67, samples=20 00:24:21.434 iops : min= 12, max= 102, avg=63.95, stdev=31.63, samples=20 00:24:21.434 lat (msec) : 10=2.07%, 20=25.97%, 50=16.91%, 100=34.25%, 250=19.84% 00:24:21.434 lat (msec) : 500=0.95% 00:24:21.434 cpu : usr=0.41%, sys=0.29%, ctx=1883, majf=0, minf=3 00:24:21.434 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:24:21.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.435 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.435 issued rwts: total=519,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.435 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.435 job83: (groupid=0, jobs=1): err= 0: pid=71771: Mon Jul 22 17:02:22 2024 00:24:21.435 read: IOPS=60, BW=7802KiB/s (7989kB/s)(60.0MiB/7875msec) 00:24:21.435 slat (usec): min=8, max=1216, avg=65.85, stdev=126.11 00:24:21.435 clat (usec): min=6961, max=61139, avg=20131.77, stdev=10296.82 00:24:21.435 lat (usec): min=7005, max=61156, avg=20197.62, stdev=10300.94 00:24:21.435 clat percentiles (usec): 00:24:21.435 | 1.00th=[ 7046], 5.00th=[ 7308], 10.00th=[ 8717], 20.00th=[11994], 00:24:21.435 | 30.00th=[14091], 40.00th=[15795], 50.00th=[17433], 60.00th=[19792], 00:24:21.435 | 70.00th=[23462], 80.00th=[26346], 90.00th=[35914], 95.00th=[40633], 00:24:21.435 | 99.00th=[51643], 99.50th=[55837], 99.90th=[61080], 99.95th=[61080], 00:24:21.435 | 99.99th=[61080] 00:24:21.435 write: IOPS=68, BW=8809KiB/s (9021kB/s)(76.1MiB/8849msec); 0 zone resets 00:24:21.435 slat (usec): min=33, max=1386, avg=124.93, stdev=141.54 00:24:21.435 clat (msec): min=20, max=433, avg=114.99, stdev=60.89 00:24:21.435 lat (msec): min=20, max=433, avg=115.12, stdev=60.91 00:24:21.435 clat percentiles (msec): 00:24:21.435 | 1.00th=[ 29], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 79], 00:24:21.435 | 30.00th=[ 83], 40.00th=[ 88], 50.00th=[ 92], 60.00th=[ 99], 00:24:21.435 | 70.00th=[ 111], 80.00th=[ 138], 90.00th=[ 203], 95.00th=[ 243], 00:24:21.435 | 99.00th=[ 409], 99.50th=[ 422], 99.90th=[ 435], 99.95th=[ 435], 00:24:21.435 | 99.99th=[ 435] 00:24:21.435 bw ( KiB/s): min= 1536, max=12544, per=0.81%, avg=7689.25, stdev=4056.91, samples=20 00:24:21.435 iops : min= 12, max= 98, avg=59.90, stdev=31.76, samples=20 00:24:21.435 lat (msec) : 10=5.51%, 20=20.94%, 50=17.72%, 
100=34.34%, 250=19.56% 00:24:21.435 lat (msec) : 500=1.93% 00:24:21.435 cpu : usr=0.43%, sys=0.21%, ctx=1857, majf=0, minf=5 00:24:21.435 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.435 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.435 issued rwts: total=480,609,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.435 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.435 job84: (groupid=0, jobs=1): err= 0: pid=71772: Mon Jul 22 17:02:22 2024 00:24:21.435 read: IOPS=64, BW=8194KiB/s (8391kB/s)(60.0MiB/7498msec) 00:24:21.435 slat (usec): min=6, max=834, avg=53.36, stdev=94.20 00:24:21.435 clat (msec): min=5, max=274, avg=26.14, stdev=38.37 00:24:21.435 lat (msec): min=6, max=274, avg=26.20, stdev=38.39 00:24:21.435 clat percentiles (msec): 00:24:21.435 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:24:21.435 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 17], 00:24:21.435 | 70.00th=[ 21], 80.00th=[ 27], 90.00th=[ 40], 95.00th=[ 85], 00:24:21.435 | 99.00th=[ 268], 99.50th=[ 271], 99.90th=[ 275], 99.95th=[ 275], 00:24:21.435 | 99.99th=[ 275] 00:24:21.435 write: IOPS=59, BW=7668KiB/s (7852kB/s)(63.5MiB/8480msec); 0 zone resets 00:24:21.435 slat (usec): min=38, max=5014, avg=148.28, stdev=268.46 00:24:21.435 clat (msec): min=70, max=388, avg=132.51, stdev=63.62 00:24:21.435 lat (msec): min=70, max=388, avg=132.66, stdev=63.63 00:24:21.435 clat percentiles (msec): 00:24:21.435 | 1.00th=[ 71], 5.00th=[ 75], 10.00th=[ 79], 20.00th=[ 83], 00:24:21.435 | 30.00th=[ 88], 40.00th=[ 94], 50.00th=[ 104], 60.00th=[ 118], 00:24:21.435 | 70.00th=[ 150], 80.00th=[ 192], 90.00th=[ 232], 95.00th=[ 262], 00:24:21.435 | 99.00th=[ 313], 99.50th=[ 355], 99.90th=[ 388], 99.95th=[ 388], 00:24:21.435 | 99.99th=[ 388] 00:24:21.435 bw ( KiB/s): min= 2304, max=12032, per=0.75%, avg=7106.06, stdev=3326.66, 
samples=18 00:24:21.435 iops : min= 18, max= 94, avg=55.28, stdev=26.02, samples=18 00:24:21.435 lat (msec) : 10=6.98%, 20=26.72%, 50=10.83%, 100=26.52%, 250=24.90% 00:24:21.435 lat (msec) : 500=4.05% 00:24:21.435 cpu : usr=0.30%, sys=0.28%, ctx=1649, majf=0, minf=7 00:24:21.435 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.435 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.435 issued rwts: total=480,508,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.435 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.435 job85: (groupid=0, jobs=1): err= 0: pid=71773: Mon Jul 22 17:02:22 2024 00:24:21.435 read: IOPS=60, BW=7729KiB/s (7915kB/s)(60.0MiB/7949msec) 00:24:21.435 slat (usec): min=7, max=1148, avg=62.34, stdev=105.34 00:24:21.435 clat (msec): min=13, max=145, avg=28.49, stdev=18.15 00:24:21.435 lat (msec): min=13, max=145, avg=28.55, stdev=18.15 00:24:21.435 clat percentiles (msec): 00:24:21.435 | 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 20], 00:24:21.435 | 30.00th=[ 21], 40.00th=[ 23], 50.00th=[ 25], 60.00th=[ 27], 00:24:21.435 | 70.00th=[ 29], 80.00th=[ 32], 90.00th=[ 42], 95.00th=[ 51], 00:24:21.435 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 146], 00:24:21.435 | 99.99th=[ 146] 00:24:21.435 write: IOPS=70, BW=9053KiB/s (9270kB/s)(73.8MiB/8342msec); 0 zone resets 00:24:21.435 slat (usec): min=38, max=2122, avg=143.53, stdev=205.80 00:24:21.435 clat (msec): min=44, max=404, avg=111.72, stdev=59.28 00:24:21.435 lat (msec): min=45, max=405, avg=111.86, stdev=59.28 00:24:21.435 clat percentiles (msec): 00:24:21.435 | 1.00th=[ 51], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 78], 00:24:21.435 | 30.00th=[ 82], 40.00th=[ 86], 50.00th=[ 91], 60.00th=[ 100], 00:24:21.435 | 70.00th=[ 108], 80.00th=[ 132], 90.00th=[ 178], 95.00th=[ 222], 00:24:21.435 | 99.00th=[ 393], 99.50th=[ 401], 
99.90th=[ 405], 99.95th=[ 405], 00:24:21.435 | 99.99th=[ 405] 00:24:21.435 bw ( KiB/s): min= 512, max=12800, per=0.83%, avg=7853.32, stdev=4001.19, samples=19 00:24:21.435 iops : min= 4, max= 100, avg=61.21, stdev=31.20, samples=19 [2024-07-22 17:02:22.812833] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.435 lat (msec) : 20=13.55%, 50=29.35%, 100=34.77%, 250=20.09%, 500=2.24% 00:24:21.435 cpu : usr=0.37%, sys=0.28%, ctx=1798, majf=0, minf=1 00:24:21.435 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.435 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.435 issued rwts: total=480,590,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.435 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.435 job86: (groupid=0, jobs=1): err= 0: pid=71784: Mon Jul 22 17:02:22 2024 00:24:21.435 read: IOPS=45, BW=5862KiB/s (6003kB/s)(40.0MiB/6987msec) 00:24:21.435 slat (usec): min=7, max=1136, avg=75.44, stdev=158.32 00:24:21.435 clat (msec): min=5, max=237, avg=43.73, stdev=49.87 00:24:21.435 lat (msec): min=5, max=237, avg=43.80, stdev=49.90 00:24:21.435 clat percentiles (msec): 00:24:21.435 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13], 00:24:21.435 | 30.00th=[ 16], 40.00th=[ 18], 50.00th=[ 23], 60.00th=[ 30], 00:24:21.435 | 70.00th=[ 39], 80.00th=[ 59], 90.00th=[ 106], 95.00th=[ 138], 00:24:21.435 | 99.00th=[ 232], 99.50th=[ 236], 99.90th=[ 239], 99.95th=[ 239], 00:24:21.435 | 99.99th=[ 239] 00:24:21.435 write: IOPS=57, BW=7404KiB/s (7582kB/s)(60.0MiB/8298msec); 0 zone resets 00:24:21.435 slat (usec): min=41, max=6580, avg=164.02, stdev=377.27 00:24:21.435 clat (msec): min=72, max=398, avg=137.58, stdev=58.84 00:24:21.435 lat (msec): min=72, max=398, avg=137.75, stdev=58.85 00:24:21.435 clat percentiles (msec): 00:24:21.435 | 1.00th=[ 75], 5.00th=[ 80], 
10.00th=[ 84], 20.00th=[ 94], 00:24:21.435 | 30.00th=[ 100], 40.00th=[ 106], 50.00th=[ 115], 60.00th=[ 131], 00:24:21.435 | 70.00th=[ 146], 80.00th=[ 182], 90.00th=[ 222], 95.00th=[ 255], 00:24:21.435 | 99.00th=[ 347], 99.50th=[ 384], 99.90th=[ 401], 99.95th=[ 401], 00:24:21.435 | 99.99th=[ 401] 00:24:21.435 bw ( KiB/s): min= 1532, max=11776, per=0.67%, avg=6369.84, stdev=3322.77, samples=19 00:24:21.435 iops : min= 11, max= 92, avg=49.47, stdev=26.13, samples=19 00:24:21.435 lat (msec) : 10=0.75%, 20=16.62%, 50=13.50%, 100=22.38%, 250=43.00% 00:24:21.435 lat (msec) : 500=3.75% 00:24:21.435 cpu : usr=0.38%, sys=0.12%, ctx=1348, majf=0, minf=7 00:24:21.435 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.435 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.435 issued rwts: total=320,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.435 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.435 job87: (groupid=0, jobs=1): err= 0: pid=71785: Mon Jul 22 17:02:22 2024 00:24:21.435 read: IOPS=58, BW=7505KiB/s (7686kB/s)(60.0MiB/8186msec) 00:24:21.435 slat (usec): min=8, max=2125, avg=83.14, stdev=208.64 00:24:21.435 clat (usec): min=15578, max=55679, avg=25530.65, stdev=7680.74 00:24:21.435 lat (usec): min=15622, max=55724, avg=25613.79, stdev=7697.38 00:24:21.435 clat percentiles (usec): 00:24:21.435 | 1.00th=[15664], 5.00th=[16909], 10.00th=[17957], 20.00th=[19006], 00:24:21.435 | 30.00th=[20317], 40.00th=[22152], 50.00th=[23725], 60.00th=[25297], 00:24:21.435 | 70.00th=[27395], 80.00th=[30802], 90.00th=[36439], 95.00th=[41681], 00:24:21.435 | 99.00th=[52167], 99.50th=[52691], 99.90th=[55837], 99.95th=[55837], 00:24:21.435 | 99.99th=[55837] 00:24:21.435 write: IOPS=73, BW=9395KiB/s (9620kB/s)(78.2MiB/8529msec); 0 zone resets 00:24:21.435 slat (usec): min=38, max=2512, avg=131.27, stdev=181.02 00:24:21.435 clat 
(msec): min=38, max=370, avg=107.74, stdev=50.88 00:24:21.435 lat (msec): min=38, max=370, avg=107.87, stdev=50.88 00:24:21.435 clat percentiles (msec): 00:24:21.435 | 1.00th=[ 45], 5.00th=[ 71], 10.00th=[ 74], 20.00th=[ 79], 00:24:21.435 | 30.00th=[ 84], 40.00th=[ 88], 50.00th=[ 93], 60.00th=[ 100], 00:24:21.435 | 70.00th=[ 107], 80.00th=[ 120], 90.00th=[ 146], 95.00th=[ 234], 00:24:21.435 | 99.00th=[ 326], 99.50th=[ 363], 99.90th=[ 372], 99.95th=[ 372], 00:24:21.435 | 99.99th=[ 372] 00:24:21.435 bw ( KiB/s): min= 1024, max=12544, per=0.88%, avg=8324.42, stdev=3844.54, samples=19 00:24:21.435 iops : min= 8, max= 98, avg=64.84, stdev=30.05, samples=19 00:24:21.436 lat (msec) : 20=12.57%, 50=30.92%, 100=34.72%, 250=19.35%, 500=2.44% 00:24:21.436 cpu : usr=0.41%, sys=0.27%, ctx=1851, majf=0, minf=5 00:24:21.436 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.436 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.436 issued rwts: total=480,626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.436 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.436 job88: (groupid=0, jobs=1): err= 0: pid=71786: Mon Jul 22 17:02:22 2024 00:24:21.436 read: IOPS=57, BW=7297KiB/s (7472kB/s)(60.0MiB/8420msec) 00:24:21.436 slat (usec): min=6, max=916, avg=60.99, stdev=114.32 00:24:21.436 clat (usec): min=8091, max=96109, avg=21139.81, stdev=10710.02 00:24:21.436 lat (usec): min=8195, max=96243, avg=21200.80, stdev=10713.87 00:24:21.436 clat percentiles (usec): 00:24:21.436 | 1.00th=[ 9503], 5.00th=[11076], 10.00th=[12911], 20.00th=[15139], 00:24:21.436 | 30.00th=[16909], 40.00th=[17957], 50.00th=[19006], 60.00th=[19792], 00:24:21.436 | 70.00th=[21103], 80.00th=[24773], 90.00th=[28967], 95.00th=[37487], 00:24:21.436 | 99.00th=[81265], 99.50th=[85459], 99.90th=[95945], 99.95th=[95945], 00:24:21.436 | 99.99th=[95945] 00:24:21.436 
write: IOPS=72, BW=9305KiB/s (9528kB/s)(79.6MiB/8763msec); 0 zone resets 00:24:21.436 slat (usec): min=35, max=4053, avg=148.87, stdev=265.80 00:24:21.436 clat (usec): min=1130, max=431115, avg=109191.60, stdev=53378.37 00:24:21.436 lat (usec): min=1201, max=431216, avg=109340.46, stdev=53379.11 00:24:21.436 clat percentiles (msec): 00:24:21.436 | 1.00th=[ 6], 5.00th=[ 26], 10.00th=[ 73], 20.00th=[ 81], 00:24:21.436 | 30.00th=[ 87], 40.00th=[ 92], 50.00th=[ 96], 60.00th=[ 105], 00:24:21.436 | 70.00th=[ 114], 80.00th=[ 133], 90.00th=[ 176], 95.00th=[ 220], 00:24:21.436 | 99.00th=[ 309], 99.50th=[ 368], 99.90th=[ 430], 99.95th=[ 430], 00:24:21.436 | 99.99th=[ 430] 00:24:21.436 bw ( KiB/s): min= 2816, max=18725, per=0.94%, avg=8960.67, stdev=3741.18, samples=18 00:24:21.436 iops : min= 22, max= 146, avg=69.94, stdev=29.15, samples=18 00:24:21.436 lat (msec) : 2=0.27%, 4=0.18%, 10=2.33%, 20=26.14%, 50=16.29% 00:24:21.436 lat (msec) : 100=29.19%, 250=24.26%, 500=1.34% 00:24:21.436 cpu : usr=0.44%, sys=0.25%, ctx=1816, majf=0, minf=5 00:24:21.436 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.436 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.436 issued rwts: total=480,637,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.436 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.436 job89: (groupid=0, jobs=1): err= 0: pid=71787: Mon Jul 22 17:02:22 2024 00:24:21.436 read: IOPS=58, BW=7443KiB/s (7621kB/s)(60.0MiB/8255msec) 00:24:21.436 slat (usec): min=6, max=2703, avg=100.48, stdev=252.79 00:24:21.436 clat (usec): min=13149, max=59217, avg=24721.73, stdev=6326.79 00:24:21.436 lat (usec): min=13164, max=59249, avg=24822.21, stdev=6324.91 00:24:21.436 clat percentiles (usec): 00:24:21.436 | 1.00th=[14484], 5.00th=[16581], 10.00th=[17957], 20.00th=[19530], 00:24:21.436 | 30.00th=[21103], 40.00th=[22414], 
50.00th=[23725], 60.00th=[25035], 00:24:21.436 | 70.00th=[26608], 80.00th=[28443], 90.00th=[31851], 95.00th=[37487], 00:24:21.436 | 99.00th=[46400], 99.50th=[47973], 99.90th=[58983], 99.95th=[58983], 00:24:21.436 | 99.99th=[58983] 00:24:21.436 write: IOPS=73, BW=9416KiB/s (9642kB/s)(78.9MiB/8578msec); 0 zone resets 00:24:21.436 slat (usec): min=40, max=2136, avg=144.13, stdev=211.03 00:24:21.436 clat (msec): min=30, max=366, avg=107.55, stdev=53.39 00:24:21.436 lat (msec): min=30, max=366, avg=107.70, stdev=53.39 00:24:21.436 clat percentiles (msec): 00:24:21.436 | 1.00th=[ 36], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 78], 00:24:21.436 | 30.00th=[ 82], 40.00th=[ 87], 50.00th=[ 91], 60.00th=[ 97], 00:24:21.436 | 70.00th=[ 105], 80.00th=[ 121], 90.00th=[ 155], 95.00th=[ 251], 00:24:21.436 | 99.00th=[ 334], 99.50th=[ 355], 99.90th=[ 368], 99.95th=[ 368], 00:24:21.436 | 99.99th=[ 368] 00:24:21.436 bw ( KiB/s): min= 1792, max=12800, per=0.93%, avg=8858.06, stdev=3455.83, samples=18 00:24:21.436 iops : min= 14, max= 100, avg=69.06, stdev=27.01, samples=18 00:24:21.436 lat (msec) : 20=9.36%, 50=34.38%, 100=35.91%, 250=17.46%, 500=2.88% 00:24:21.436 cpu : usr=0.42%, sys=0.27%, ctx=1847, majf=0, minf=1 00:24:21.436 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.436 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.436 issued rwts: total=480,631,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.436 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.436 job90: (groupid=0, jobs=1): err= 0: pid=71788: Mon Jul 22 17:02:22 2024 00:24:21.436 read: IOPS=60, BW=7727KiB/s (7913kB/s)(60.0MiB/7951msec) 00:24:21.436 slat (usec): min=7, max=1512, avg=90.75, stdev=184.36 00:24:21.436 clat (msec): min=9, max=172, avg=29.90, stdev=26.49 00:24:21.436 lat (msec): min=9, max=172, avg=29.99, stdev=26.50 00:24:21.436 clat percentiles 
(msec): 00:24:21.436 | 1.00th=[ 11], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 16], 00:24:21.436 | 30.00th=[ 18], 40.00th=[ 20], 50.00th=[ 22], 60.00th=[ 25], 00:24:21.436 | 70.00th=[ 29], 80.00th=[ 36], 90.00th=[ 54], 95.00th=[ 72], 00:24:21.436 | 99.00th=[ 174], 99.50th=[ 174], 99.90th=[ 174], 99.95th=[ 174], 00:24:21.436 | 99.99th=[ 174] 00:24:21.436 write: IOPS=69, BW=8933KiB/s (9148kB/s)(72.2MiB/8282msec); 0 zone resets 00:24:21.436 slat (usec): min=39, max=3051, avg=138.52, stdev=203.34 00:24:21.436 clat (msec): min=5, max=507, avg=113.35, stdev=56.25 00:24:21.436 lat (msec): min=5, max=508, avg=113.49, stdev=56.25 00:24:21.436 clat percentiles (msec): 00:24:21.436 | 1.00th=[ 12], 5.00th=[ 71], 10.00th=[ 74], 20.00th=[ 79], 00:24:21.436 | 30.00th=[ 84], 40.00th=[ 91], 50.00th=[ 96], 60.00th=[ 106], 00:24:21.436 | 70.00th=[ 118], 80.00th=[ 136], 90.00th=[ 188], 95.00th=[ 213], 00:24:21.436 | 99.00th=[ 338], 99.50th=[ 464], 99.90th=[ 510], 99.95th=[ 510], 00:24:21.436 | 99.99th=[ 510] 00:24:21.436 bw ( KiB/s): min= 1536, max=12825, per=0.77%, avg=7294.95, stdev=4105.95, samples=20 00:24:21.436 iops : min= 12, max= 100, avg=56.90, stdev=31.98, samples=20 00:24:21.436 lat (msec) : 10=0.57%, 20=19.75%, 50=21.55%, 100=32.61%, 250=24.01% 00:24:21.436 lat (msec) : 500=1.42%, 750=0.09% 00:24:21.436 cpu : usr=0.44%, sys=0.24%, ctx=1728, majf=0, minf=5 00:24:21.436 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.436 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.436 issued rwts: total=480,578,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.436 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.436 job91: (groupid=0, jobs=1): err= 0: pid=71790: Mon Jul 22 17:02:22 2024 00:24:21.436 read: IOPS=60, BW=7753KiB/s (7939kB/s)(62.5MiB/8255msec) 00:24:21.436 slat (usec): min=7, max=2521, avg=63.35, stdev=151.04 
00:24:21.436 clat (usec): min=5896, max=99733, avg=22758.98, stdev=12644.31 00:24:21.436 lat (usec): min=5925, max=99749, avg=22822.33, stdev=12669.04 00:24:21.436 clat percentiles (msec): 00:24:21.436 | 1.00th=[ 9], 5.00th=[ 12], 10.00th=[ 14], 20.00th=[ 16], 00:24:21.436 | 30.00th=[ 16], 40.00th=[ 18], 50.00th=[ 21], 60.00th=[ 22], 00:24:21.436 | 70.00th=[ 24], 80.00th=[ 29], 90.00th=[ 36], 95.00th=[ 44], 00:24:21.436 | 99.00th=[ 79], 99.50th=[ 97], 99.90th=[ 101], 99.95th=[ 101], 00:24:21.436 | 99.99th=[ 101] 00:24:21.436 write: IOPS=74, BW=9549KiB/s (9778kB/s)(80.0MiB/8579msec); 0 zone resets 00:24:21.436 slat (usec): min=34, max=1802, avg=138.37, stdev=180.60 00:24:21.436 clat (msec): min=29, max=351, avg=106.32, stdev=42.55 00:24:21.436 lat (msec): min=29, max=351, avg=106.46, stdev=42.57 00:24:21.436 clat percentiles (msec): 00:24:21.436 | 1.00th=[ 37], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 77], 00:24:21.436 | 30.00th=[ 80], 40.00th=[ 85], 50.00th=[ 92], 60.00th=[ 99], 00:24:21.436 | 70.00th=[ 111], 80.00th=[ 133], 90.00th=[ 165], 95.00th=[ 190], 00:24:21.436 | 99.00th=[ 249], 99.50th=[ 284], 99.90th=[ 351], 99.95th=[ 351], 00:24:21.436 | 99.99th=[ 351] 00:24:21.436 bw ( KiB/s): min= 256, max=13082, per=0.86%, avg=8190.25, stdev=3870.53, samples=20 00:24:21.436 iops : min= 2, max= 102, avg=63.80, stdev=30.27, samples=20 00:24:21.436 lat (msec) : 10=0.96%, 20=21.23%, 50=20.61%, 100=36.05%, 250=20.61% 00:24:21.436 lat (msec) : 500=0.53% 00:24:21.436 cpu : usr=0.44%, sys=0.26%, ctx=1920, majf=0, minf=1 00:24:21.436 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.436 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.436 issued rwts: total=500,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.436 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.436 job92: (groupid=0, jobs=1): err= 0: pid=71791: Mon 
Jul 22 17:02:22 2024 00:24:21.436 read: IOPS=62, BW=7994KiB/s (8186kB/s)(60.0MiB/7686msec) 00:24:21.436 slat (usec): min=6, max=667, avg=53.89, stdev=83.60 00:24:21.436 clat (msec): min=5, max=173, avg=17.21, stdev=23.23 00:24:21.436 lat (msec): min=5, max=173, avg=17.27, stdev=23.24 00:24:21.436 clat percentiles (msec): 00:24:21.436 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 9], 00:24:21.436 | 30.00th=[ 10], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:24:21.436 | 70.00th=[ 15], 80.00th=[ 18], 90.00th=[ 22], 95.00th=[ 38], 00:24:21.436 | 99.00th=[ 169], 99.50th=[ 171], 99.90th=[ 174], 99.95th=[ 174], 00:24:21.436 | 99.99th=[ 174] 00:24:21.436 write: IOPS=57, BW=7331KiB/s (7506kB/s)(64.5MiB/9010msec); 0 zone resets 00:24:21.436 slat (usec): min=37, max=7777, avg=183.78, stdev=412.11 00:24:21.436 clat (msec): min=62, max=393, avg=138.75, stdev=61.49 00:24:21.436 lat (msec): min=62, max=393, avg=138.94, stdev=61.50 00:24:21.436 clat percentiles (msec): 00:24:21.437 | 1.00th=[ 69], 5.00th=[ 72], 10.00th=[ 78], 20.00th=[ 86], 00:24:21.437 | 30.00th=[ 95], 40.00th=[ 108], 50.00th=[ 129], 60.00th=[ 142], 00:24:21.437 | 70.00th=[ 165], 80.00th=[ 180], 90.00th=[ 207], 95.00th=[ 271], 00:24:21.437 | 99.00th=[ 359], 99.50th=[ 380], 99.90th=[ 393], 99.95th=[ 393], 00:24:21.437 | 99.99th=[ 393] 00:24:21.437 bw ( KiB/s): min= 768, max=11264, per=0.68%, avg=6511.55, stdev=2934.35, samples=20 00:24:21.437 iops : min= 6, max= 88, avg=50.60, stdev=23.00, samples=20 00:24:21.437 lat (msec) : 10=14.86%, 20=26.31%, 50=5.42%, 100=18.07%, 250=32.03% 00:24:21.437 lat (msec) : 500=3.31% 00:24:21.437 cpu : usr=0.35%, sys=0.29%, ctx=1658, majf=0, minf=5 00:24:21.437 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.437 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.437 issued rwts: total=480,516,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:24:21.437 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.437 job93: (groupid=0, jobs=1): err= 0: pid=71792: Mon Jul 22 17:02:22 2024 00:24:21.437 read: IOPS=56, BW=7223KiB/s (7396kB/s)(60.0MiB/8506msec) 00:24:21.437 slat (usec): min=8, max=2030, avg=82.95, stdev=196.35 00:24:21.437 clat (msec): min=7, max=152, avg=26.22, stdev=17.83 00:24:21.437 lat (msec): min=7, max=152, avg=26.30, stdev=17.83 00:24:21.437 clat percentiles (msec): 00:24:21.437 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 15], 00:24:21.437 | 30.00th=[ 17], 40.00th=[ 19], 50.00th=[ 22], 60.00th=[ 25], 00:24:21.437 | 70.00th=[ 28], 80.00th=[ 33], 90.00th=[ 43], 95.00th=[ 54], 00:24:21.437 | 99.00th=[ 105], 99.50th=[ 121], 99.90th=[ 153], 99.95th=[ 153], 00:24:21.437 | 99.99th=[ 153] 00:24:21.437 write: IOPS=75, BW=9648KiB/s (9879kB/s)(80.0MiB/8491msec); 0 zone resets 00:24:21.437 slat (usec): min=40, max=4163, avg=139.40, stdev=239.54 00:24:21.437 clat (msec): min=20, max=442, avg=105.14, stdev=48.93 00:24:21.437 lat (msec): min=20, max=442, avg=105.28, stdev=48.94 00:24:21.437 clat percentiles (msec): 00:24:21.437 | 1.00th=[ 30], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 75], 00:24:21.437 | 30.00th=[ 79], 40.00th=[ 84], 50.00th=[ 90], 60.00th=[ 96], 00:24:21.437 | 70.00th=[ 107], 80.00th=[ 129], 90.00th=[ 159], 95.00th=[ 186], 00:24:21.437 | 99.00th=[ 321], 99.50th=[ 376], 99.90th=[ 443], 99.95th=[ 443], 00:24:21.437 | 99.99th=[ 443] 00:24:21.437 bw ( KiB/s): min= 1536, max=13312, per=0.90%, avg=8525.95, stdev=4138.62, samples=19 00:24:21.437 iops : min= 12, max= 104, avg=66.47, stdev=32.31, samples=19 00:24:21.437 lat (msec) : 10=0.09%, 20=19.20%, 50=21.88%, 100=39.46%, 250=18.04% 00:24:21.437 lat (msec) : 500=1.34% 00:24:21.437 cpu : usr=0.48%, sys=0.20%, ctx=1816, majf=0, minf=7 00:24:21.437 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.437 
complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.437 issued rwts: total=480,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.437 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.437 job94: (groupid=0, jobs=1): err= 0: pid=71793: Mon Jul 22 17:02:22 2024 00:24:21.437 read: IOPS=60, BW=7740KiB/s (7926kB/s)(60.0MiB/7938msec) 00:24:21.437 slat (usec): min=7, max=1009, avg=72.65, stdev=128.03 00:24:21.437 clat (usec): min=10324, max=52576, avg=20150.87, stdev=7829.02 00:24:21.437 lat (usec): min=10532, max=52605, avg=20223.52, stdev=7829.73 00:24:21.437 clat percentiles (usec): 00:24:21.437 | 1.00th=[10683], 5.00th=[12256], 10.00th=[13435], 20.00th=[13960], 00:24:21.437 | 30.00th=[14615], 40.00th=[16319], 50.00th=[17695], 60.00th=[19530], 00:24:21.437 | 70.00th=[21627], 80.00th=[25560], 90.00th=[30540], 95.00th=[36963], 00:24:21.437 | 99.00th=[47449], 99.50th=[49546], 99.90th=[52691], 99.95th=[52691], 00:24:21.437 | 99.99th=[52691] 00:24:21.437 write: IOPS=71, BW=9155KiB/s (9375kB/s)(79.2MiB/8864msec); 0 zone resets 00:24:21.437 slat (usec): min=39, max=17819, avg=176.87, stdev=727.10 00:24:21.437 clat (msec): min=35, max=508, avg=110.35, stdev=55.27 00:24:21.437 lat (msec): min=36, max=508, avg=110.53, stdev=55.25 00:24:21.437 clat percentiles (msec): 00:24:21.437 | 1.00th=[ 42], 5.00th=[ 71], 10.00th=[ 74], 20.00th=[ 79], 00:24:21.437 | 30.00th=[ 83], 40.00th=[ 90], 50.00th=[ 93], 60.00th=[ 99], 00:24:21.437 | 70.00th=[ 111], 80.00th=[ 128], 90.00th=[ 163], 95.00th=[ 220], 00:24:21.437 | 99.00th=[ 359], 99.50th=[ 397], 99.90th=[ 510], 99.95th=[ 510], 00:24:21.437 | 99.99th=[ 510] 00:24:21.437 bw ( KiB/s): min= 1792, max=12800, per=0.84%, avg=8010.45, stdev=3920.88, samples=20 00:24:21.437 iops : min= 14, max= 100, avg=62.50, stdev=30.56, samples=20 00:24:21.437 lat (msec) : 20=26.75%, 50=16.88%, 100=34.74%, 250=20.11%, 500=1.44% 00:24:21.437 lat (msec) : 750=0.09% 00:24:21.437 cpu : usr=0.45%, sys=0.22%, ctx=1932, 
majf=0, minf=1 00:24:21.437 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.437 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.437 issued rwts: total=480,634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.437 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.437 job95: (groupid=0, jobs=1): err= 0: pid=71794: Mon Jul 22 17:02:22 2024 00:24:21.437 read: IOPS=64, BW=8273KiB/s (8471kB/s)(60.0MiB/7427msec) 00:24:21.437 slat (usec): min=6, max=527, avg=54.91, stdev=85.07 00:24:21.437 clat (msec): min=5, max=155, avg=19.00, stdev=25.31 00:24:21.437 lat (msec): min=5, max=156, avg=19.05, stdev=25.31 00:24:21.437 clat percentiles (msec): 00:24:21.437 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 10], 00:24:21.437 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:24:21.437 | 70.00th=[ 16], 80.00th=[ 18], 90.00th=[ 26], 95.00th=[ 56], 00:24:21.437 | 99.00th=[ 150], 99.50th=[ 155], 99.90th=[ 157], 99.95th=[ 157], 00:24:21.437 | 99.99th=[ 157] 00:24:21.437 write: IOPS=59, BW=7670KiB/s (7854kB/s)(66.8MiB/8912msec); 0 zone resets 00:24:21.437 slat (usec): min=36, max=4640, avg=153.98, stdev=289.40 00:24:21.437 clat (msec): min=66, max=449, avg=132.71, stdev=60.54 00:24:21.437 lat (msec): min=66, max=449, avg=132.86, stdev=60.54 00:24:21.437 clat percentiles (msec): 00:24:21.437 | 1.00th=[ 73], 5.00th=[ 73], 10.00th=[ 78], 20.00th=[ 84], 00:24:21.437 | 30.00th=[ 91], 40.00th=[ 101], 50.00th=[ 112], 60.00th=[ 125], 00:24:21.437 | 70.00th=[ 155], 80.00th=[ 178], 90.00th=[ 207], 95.00th=[ 264], 00:24:21.437 | 99.00th=[ 326], 99.50th=[ 384], 99.90th=[ 451], 99.95th=[ 451], 00:24:21.437 | 99.99th=[ 451] 00:24:21.437 bw ( KiB/s): min= 1536, max=11264, per=0.71%, avg=6741.40, stdev=3285.29, samples=20 00:24:21.437 iops : min= 12, max= 88, avg=52.40, stdev=25.58, samples=20 00:24:21.437 lat (msec) : 
10=11.14%, 20=28.90%, 50=4.73%, 100=22.09%, 250=30.08% 00:24:21.437 lat (msec) : 500=3.06% 00:24:21.437 cpu : usr=0.34%, sys=0.24%, ctx=1721, majf=0, minf=3 00:24:21.437 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.437 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.437 issued rwts: total=480,534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.437 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.437 job96: (groupid=0, jobs=1): err= 0: pid=71795: Mon Jul 22 17:02:22 2024 00:24:21.437 read: IOPS=56, BW=7238KiB/s (7412kB/s)(60.0MiB/8488msec) 00:24:21.437 slat (usec): min=7, max=1385, avg=74.87, stdev=161.87 00:24:21.437 clat (msec): min=5, max=191, avg=27.10, stdev=27.36 00:24:21.437 lat (msec): min=5, max=191, avg=27.18, stdev=27.36 00:24:21.437 clat percentiles (msec): 00:24:21.437 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 14], 20.00th=[ 15], 00:24:21.437 | 30.00th=[ 17], 40.00th=[ 18], 50.00th=[ 20], 60.00th=[ 23], 00:24:21.437 | 70.00th=[ 26], 80.00th=[ 31], 90.00th=[ 37], 95.00th=[ 74], 00:24:21.437 | 99.00th=[ 186], 99.50th=[ 186], 99.90th=[ 192], 99.95th=[ 192], 00:24:21.437 | 99.99th=[ 192] 00:24:21.437 write: IOPS=75, BW=9700KiB/s (9933kB/s)(79.9MiB/8432msec); 0 zone resets 00:24:21.437 slat (usec): min=38, max=3248, avg=143.84, stdev=242.39 00:24:21.437 clat (msec): min=15, max=331, avg=104.96, stdev=44.57 00:24:21.437 lat (msec): min=15, max=331, avg=105.10, stdev=44.59 00:24:21.437 clat percentiles (msec): 00:24:21.437 | 1.00th=[ 19], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 75], 00:24:21.438 | 30.00th=[ 80], 40.00th=[ 85], 50.00th=[ 91], 60.00th=[ 99], 00:24:21.438 | 70.00th=[ 109], 80.00th=[ 130], 90.00th=[ 169], 95.00th=[ 188], 00:24:21.438 | 99.00th=[ 292], 99.50th=[ 305], 99.90th=[ 330], 99.95th=[ 330], 00:24:21.438 | 99.99th=[ 330] 00:24:21.438 bw ( KiB/s): min= 2043, max=15134, 
per=0.94%, avg=8986.67, stdev=3577.60, samples=18 00:24:21.438 iops : min= 15, max= 118, avg=70.00, stdev=28.04, samples=18 00:24:21.438 lat (msec) : 10=0.80%, 20=21.72%, 50=19.30%, 100=35.57%, 250=21.45% 00:24:21.438 lat (msec) : 500=1.16% 00:24:21.438 cpu : usr=0.40%, sys=0.29%, ctx=1827, majf=0, minf=5 00:24:21.438 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.438 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.438 issued rwts: total=480,639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.438 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.438 job97: (groupid=0, jobs=1): err= 0: pid=71796: Mon Jul 22 17:02:22 2024 00:24:21.438 read: IOPS=54, BW=6966KiB/s (7134kB/s)(49.1MiB/7221msec) 00:24:21.438 slat (usec): min=6, max=1852, avg=56.86, stdev=128.85 00:24:21.438 clat (msec): min=5, max=174, avg=31.02, stdev=37.28 00:24:21.438 lat (msec): min=5, max=174, avg=31.08, stdev=37.28 00:24:21.438 clat percentiles (msec): 00:24:21.438 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 11], 00:24:21.438 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 17], 60.00th=[ 22], 00:24:21.438 | 70.00th=[ 27], 80.00th=[ 37], 90.00th=[ 91], 95.00th=[ 142], 00:24:21.438 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 176], 99.95th=[ 176], 00:24:21.438 | 99.99th=[ 176] 00:24:21.438 write: IOPS=56, BW=7245KiB/s (7419kB/s)(60.0MiB/8480msec); 0 zone resets 00:24:21.438 slat (usec): min=40, max=4507, avg=158.12, stdev=320.90 00:24:21.438 clat (msec): min=71, max=626, avg=140.50, stdev=73.47 00:24:21.438 lat (msec): min=71, max=626, avg=140.66, stdev=73.47 00:24:21.438 clat percentiles (msec): 00:24:21.438 | 1.00th=[ 73], 5.00th=[ 77], 10.00th=[ 81], 20.00th=[ 87], 00:24:21.438 | 30.00th=[ 95], 40.00th=[ 104], 50.00th=[ 117], 60.00th=[ 140], 00:24:21.438 | 70.00th=[ 165], 80.00th=[ 184], 90.00th=[ 203], 95.00th=[ 268], 00:24:21.438 | 
99.00th=[ 485], 99.50th=[ 542], 99.90th=[ 625], 99.95th=[ 625], 00:24:21.438 | 99.99th=[ 625] 00:24:21.438 bw ( KiB/s): min= 256, max=12800, per=0.68%, avg=6466.53, stdev=3373.84, samples=19 00:24:21.438 iops : min= 2, max= 100, avg=50.42, stdev=26.35, samples=19 00:24:21.438 lat (msec) : 10=6.64%, 20=19.47%, 50=13.40%, 100=22.22%, 250=34.94% 00:24:21.438 lat (msec) : 500=2.86%, 750=0.46% 00:24:21.438 cpu : usr=0.36%, sys=0.14%, ctx=1430, majf=0, minf=5 00:24:21.438 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.438 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.438 issued rwts: total=393,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.438 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.438 job98: (groupid=0, jobs=1): err= 0: pid=71797: Mon Jul 22 17:02:22 2024 00:24:21.438 read: IOPS=59, BW=7555KiB/s (7737kB/s)(60.0MiB/8132msec) 00:24:21.438 slat (usec): min=6, max=2248, avg=70.20, stdev=154.17 00:24:21.438 clat (usec): min=13106, max=65229, avg=22164.85, stdev=8363.50 00:24:21.438 lat (usec): min=13128, max=65238, avg=22235.05, stdev=8362.14 00:24:21.438 clat percentiles (usec): 00:24:21.438 | 1.00th=[13304], 5.00th=[13698], 10.00th=[14091], 20.00th=[15008], 00:24:21.438 | 30.00th=[15926], 40.00th=[17171], 50.00th=[20055], 60.00th=[22152], 00:24:21.438 | 70.00th=[24511], 80.00th=[28705], 90.00th=[32637], 95.00th=[39060], 00:24:21.438 | 99.00th=[45351], 99.50th=[56361], 99.90th=[65274], 99.95th=[65274], 00:24:21.438 | 99.99th=[65274] 00:24:21.438 write: IOPS=71, BW=9124KiB/s (9343kB/s)(77.8MiB/8726msec); 0 zone resets 00:24:21.438 slat (usec): min=37, max=1935, avg=125.45, stdev=189.59 00:24:21.438 clat (msec): min=54, max=411, avg=110.99, stdev=55.64 00:24:21.438 lat (msec): min=54, max=411, avg=111.11, stdev=55.63 00:24:21.438 clat percentiles (msec): 00:24:21.438 | 1.00th=[ 63], 5.00th=[ 
72], 10.00th=[ 73], 20.00th=[ 77], 00:24:21.438 | 30.00th=[ 81], 40.00th=[ 87], 50.00th=[ 94], 60.00th=[ 101], 00:24:21.438 | 70.00th=[ 113], 80.00th=[ 131], 90.00th=[ 167], 95.00th=[ 236], 00:24:21.438 | 99.00th=[ 334], 99.50th=[ 384], 99.90th=[ 414], 99.95th=[ 414], 00:24:21.438 | 99.99th=[ 414] 00:24:21.438 bw ( KiB/s): min= 1792, max=13082, per=0.92%, avg=8744.72, stdev=3377.54, samples=18 00:24:21.438 iops : min= 14, max= 102, avg=68.17, stdev=26.32, samples=18 00:24:21.438 lat (msec) : 20=21.51%, 50=21.78%, 100=33.85%, 250=20.24%, 500=2.63% 00:24:21.438 cpu : usr=0.46%, sys=0.21%, ctx=1794, majf=0, minf=3 00:24:21.438 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.438 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.438 issued rwts: total=480,622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.438 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.438 job99: (groupid=0, jobs=1): err= 0: pid=71798: Mon Jul 22 17:02:22 2024 00:24:21.438 read: IOPS=64, BW=8216KiB/s (8413kB/s)(60.0MiB/7478msec) 00:24:21.438 slat (usec): min=5, max=1763, avg=58.85, stdev=120.44 00:24:21.438 clat (usec): min=7442, max=82764, avg=17196.05, stdev=10852.81 00:24:21.438 lat (usec): min=7466, max=82771, avg=17254.91, stdev=10856.62 00:24:21.438 clat percentiles (usec): 00:24:21.438 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10814], 00:24:21.438 | 30.00th=[11469], 40.00th=[11994], 50.00th=[13173], 60.00th=[15533], 00:24:21.438 | 70.00th=[16909], 80.00th=[19006], 90.00th=[28705], 95.00th=[41157], 00:24:21.438 | 99.00th=[62653], 99.50th=[73925], 99.90th=[82314], 99.95th=[82314], 00:24:21.438 | 99.99th=[82314] 00:24:21.438 write: IOPS=67, BW=8601KiB/s (8807kB/s)(75.8MiB/9019msec); 0 zone resets 00:24:21.438 slat (usec): min=38, max=1271, avg=125.63, stdev=146.14 00:24:21.438 clat (msec): min=46, max=346, 
avg=118.10, stdev=58.40 00:24:21.438 lat (msec): min=46, max=347, avg=118.23, stdev=58.40 00:24:21.438 clat percentiles (msec): 00:24:21.438 | 1.00th=[ 53], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 77], 00:24:21.438 | 30.00th=[ 82], 40.00th=[ 86], 50.00th=[ 95], 60.00th=[ 102], 00:24:21.438 | 70.00th=[ 123], 80.00th=[ 163], 90.00th=[ 209], 95.00th=[ 245], 00:24:21.438 | 99.00th=[ 309], 99.50th=[ 321], 99.90th=[ 347], 99.95th=[ 347], 00:24:21.438 | 99.99th=[ 347] 00:24:21.438 bw ( KiB/s): min= 1792, max=13056, per=0.80%, avg=7651.25, stdev=3872.91, samples=20 00:24:21.438 iops : min= 14, max= 102, avg=59.65, stdev=30.17, samples=20 00:24:21.438 lat (msec) : 10=4.42%, 20=31.40%, 50=7.46%, 100=33.89%, 250=20.26% 00:24:21.438 lat (msec) : 500=2.58% 00:24:21.438 cpu : usr=0.36%, sys=0.28%, ctx=1738, majf=0, minf=3 00:24:21.438 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.438 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.438 issued rwts: total=480,606,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.438 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.438 00:24:21.438 Run status group 0 (all jobs): 00:24:21.438 READ: bw=785MiB/s (823MB/s), 5862KiB/s-12.1MiB/s (6003kB/s-12.7MB/s), io=7290MiB (7644MB), run=6673-9291msec 00:24:21.438 WRITE: bw=929MiB/s (974MB/s), 6975KiB/s-13.2MiB/s (7142kB/s-13.9MB/s), io=8569MiB (8986MB), run=7799-9228msec 00:24:21.438 00:24:21.438 Disk stats (read/write): 00:24:21.438 sdc: ios=468/480, merge=0/0, ticks=17170/60006, in_queue=77177, util=72.26% 00:24:21.438 sdf: ios=514/552, merge=0/0, ticks=11776/63947, in_queue=75723, util=72.09% 00:24:21.438 sdi: ios=514/524, merge=0/0, ticks=9668/66582, in_queue=76251, util=72.18% 00:24:21.438 sdk: ios=516/640, merge=0/0, ticks=7639/70171, in_queue=77811, util=72.30% 00:24:21.438 sdm: ios=515/625, merge=0/0, ticks=13773/63601, 
in_queue=77375, util=72.70% 00:24:21.438 sds: ios=514/480, merge=0/0, ticks=8259/69493, in_queue=77753, util=72.62% 00:24:21.438 sdy: ios=515/586, merge=0/0, ticks=13139/62889, in_queue=76029, util=72.61% 00:24:21.438 sdad: ios=515/608, merge=0/0, ticks=14132/62109, in_queue=76242, util=73.17% 00:24:21.438 sdaf: ios=482/625, merge=0/0, ticks=9362/67886, in_queue=77249, util=73.80% 00:24:21.438 sdaj: ios=481/615, merge=0/0, ticks=12657/63604, in_queue=76262, util=74.03% 00:24:21.438 sdd: ios=802/819, merge=0/0, ticks=13745/62384, in_queue=76130, util=74.17% 00:24:21.438 sdj: ios=802/860, merge=0/0, ticks=12413/64214, in_queue=76628, util=75.01% 00:24:21.438 sdn: ios=749/800, merge=0/0, ticks=11059/65940, in_queue=76999, util=75.21% 00:24:21.438 sdq: ios=641/726, merge=0/0, ticks=12555/65003, in_queue=77559, util=75.53% 00:24:21.438 sdv: ios=802/846, merge=0/0, ticks=11840/64661, in_queue=76502, util=75.77% 00:24:21.438 sdz: ios=802/865, merge=0/0, ticks=9060/68173, in_queue=77234, util=76.16% 00:24:21.438 sdac: ios=642/733, merge=0/0, ticks=14617/61842, in_queue=76459, util=75.69% 00:24:21.438 sdag: ios=802/882, merge=0/0, ticks=11152/65635, in_queue=76788, util=76.05% 00:24:21.438 sdam: ios=642/776, merge=0/0, ticks=12382/64405, in_queue=76787, util=76.28% 00:24:21.438 sdan: ios=640/730, merge=0/0, ticks=11319/66494, in_queue=77813, util=76.45% 00:24:21.438 sdg: ios=838/902, merge=0/0, ticks=10397/67839, in_queue=78236, util=76.73% 00:24:21.438 sdp: ios=802/874, merge=0/0, ticks=9802/67052, in_queue=76855, util=77.16% 00:24:21.438 sdu: ios=840/889, merge=0/0, ticks=9515/68334, in_queue=77850, util=77.26% 00:24:21.438 sdw: ios=641/774, merge=0/0, ticks=10315/66992, in_queue=77307, util=77.08% 00:24:21.438 sdab: ios=802/800, merge=0/0, ticks=13958/62681, in_queue=76639, util=77.25% 00:24:21.438 sdah: ios=837/904, merge=0/0, ticks=10197/67866, in_queue=78063, util=77.92% 00:24:21.438 sdal: ios=836/833, merge=0/0, ticks=14913/62355, in_queue=77268, util=77.79% 
00:24:21.438 sdap: ios=642/784, merge=0/0, ticks=9046/68443, in_queue=77489, util=78.07% 00:24:21.438 sdar: ios=836/804, merge=0/0, ticks=15052/61427, in_queue=76479, util=78.29% 00:24:21.438 sdau: ios=641/782, merge=0/0, ticks=6227/71568, in_queue=77795, util=78.34% 00:24:21.438 sdae: ios=481/626, merge=0/0, ticks=10860/66120, in_queue=76981, util=78.78% 00:24:21.438 sdai: ios=480/480, merge=0/0, ticks=13555/63644, in_queue=77199, util=78.81% 00:24:21.438 sdak: ios=480/558, merge=0/0, ticks=6522/69580, in_queue=76102, util=78.92% 00:24:21.438 sdao: ios=481/604, merge=0/0, ticks=12399/64155, in_queue=76555, util=79.17% 00:24:21.438 sdaq: ios=482/638, merge=0/0, ticks=6460/71122, in_queue=77582, util=79.42% 00:24:21.438 sdas: ios=482/625, merge=0/0, ticks=9230/67967, in_queue=77198, util=79.56% 00:24:21.438 sdat: ios=367/480, merge=0/0, ticks=11163/66665, in_queue=77829, util=79.65% 00:24:21.438 sdav: ios=481/624, merge=0/0, ticks=9870/66963, in_queue=76833, util=80.06% 00:24:21.439 sdaw: ios=481/605, merge=0/0, ticks=11010/65707, in_queue=76718, util=80.51% 00:24:21.439 sdax: ios=481/544, merge=0/0, ticks=11842/64932, in_queue=76774, util=80.65% 00:24:21.439 sday: ios=480/480, merge=0/0, ticks=14194/62496, in_queue=76690, util=80.91% 00:24:21.439 sdaz: ios=320/479, merge=0/0, ticks=11856/65977, in_queue=77833, util=81.05% 00:24:21.439 sdbb: ios=482/638, merge=0/0, ticks=9249/67845, in_queue=77094, util=81.22% 00:24:21.439 sdbc: ios=481/596, merge=0/0, ticks=13839/61964, in_queue=75804, util=81.24% 00:24:21.439 sdbf: ios=481/618, merge=0/0, ticks=11058/65509, in_queue=76567, util=81.79% 00:24:21.439 sdbg: ios=482/639, merge=0/0, ticks=9469/67368, in_queue=76837, util=81.64% 00:24:21.439 sdbi: ios=480/546, merge=0/0, ticks=11663/64141, in_queue=75804, util=81.99% 00:24:21.439 sdbl: ios=480/500, merge=0/0, ticks=9461/66499, in_queue=75961, util=82.37% 00:24:21.439 sdbo: ios=481/608, merge=0/0, ticks=12896/62870, in_queue=75766, util=82.59% 00:24:21.439 sdbr: 
ios=481/607, merge=0/0, ticks=12950/62446, in_queue=75396, util=82.89% 00:24:21.439 sdba: ios=641/767, merge=0/0, ticks=10817/66111, in_queue=76929, util=83.14% 00:24:21.439 sdbd: ios=640/719, merge=0/0, ticks=14897/63047, in_queue=77945, util=83.70% 00:24:21.439 sdbe: ios=839/866, merge=0/0, ticks=12877/64683, in_queue=77560, util=84.03% 00:24:21.439 sdbh: ios=642/732, merge=0/0, ticks=10142/66991, in_queue=77133, util=83.97% 00:24:21.439 sdbk: ios=802/847, merge=0/0, ticks=11920/64579, in_queue=76500, util=84.46% 00:24:21.439 sdbm: ios=802/848, merge=0/0, ticks=11640/64905, in_queue=76545, util=84.70% 00:24:21.439 sdbp: ios=767/800, merge=0/0, ticks=11139/65911, in_queue=77051, util=85.11% 00:24:21.439 sdbq: ios=802/825, merge=0/0, ticks=12639/63231, in_queue=75871, util=85.21% 00:24:21.439 sdbt: ios=802/843, merge=0/0, ticks=11241/64632, in_queue=75873, util=85.45% 00:24:21.439 sdbv: ios=803/802, merge=0/0, ticks=11430/64876, in_queue=76307, util=85.73% 00:24:21.439 sdbj: ios=839/891, merge=0/0, ticks=11520/65013, in_queue=76533, util=85.96% 00:24:21.439 sdbn: ios=802/863, merge=0/0, ticks=10326/66063, in_queue=76389, util=86.23% 00:24:21.439 sdbs: ios=642/755, merge=0/0, ticks=11286/66449, in_queue=77736, util=86.07% 00:24:21.439 sdbu: ios=802/878, merge=0/0, ticks=10350/66318, in_queue=76669, util=86.16% 00:24:21.439 sdbw: ios=802/857, merge=0/0, ticks=10121/66179, in_queue=76301, util=86.67% 00:24:21.439 sdbx: ios=802/881, merge=0/0, ticks=10864/65278, in_queue=76142, util=86.91% 00:24:21.439 sdby: ios=802/811, merge=0/0, ticks=9713/66478, in_queue=76191, util=87.29% 00:24:21.439 sdcb: ios=840/802, merge=0/0, ticks=10863/65410, in_queue=76273, util=87.93% 00:24:21.439 sdce: ios=802/850, merge=0/0, ticks=9785/67228, in_queue=77013, util=88.11% 00:24:21.439 sdcg: ios=641/747, merge=0/0, ticks=15914/61875, in_queue=77790, util=88.01% 00:24:21.439 sdca: ios=331/480, merge=0/0, ticks=8526/67789, in_queue=76315, util=87.38% 00:24:21.439 sdcc: ios=481/622, 
merge=0/0, ticks=12214/64698, in_queue=76912, util=89.01% 00:24:21.439 sdcf: ios=480/566, merge=0/0, ticks=8245/68758, in_queue=77004, util=89.33% 00:24:21.439 sdci: ios=448/480, merge=0/0, ticks=14247/63645, in_queue=77893, util=89.43% 00:24:21.439 sdck: ios=480/579, merge=0/0, ticks=8217/68836, in_queue=77054, util=89.46% 00:24:21.439 sdcm: ios=481/606, merge=0/0, ticks=10131/66352, in_queue=76483, util=90.13% 00:24:21.439 sdcp: ios=480/600, merge=0/0, ticks=8948/67797, in_queue=76745, util=90.39% 00:24:21.439 sdcr: ios=480/593, merge=0/0, ticks=9376/66532, in_queue=75909, util=90.59% 00:24:21.439 sdct: ios=481/628, merge=0/0, ticks=12932/64086, in_queue=77019, util=91.14% 00:24:21.439 sdcu: ios=481/630, merge=0/0, ticks=12825/63590, in_queue=76416, util=91.62% 00:24:21.439 sdbz: ios=481/602, merge=0/0, ticks=12271/63261, in_queue=75533, util=91.57% 00:24:21.439 sdcd: ios=324/480, merge=0/0, ticks=15449/60460, in_queue=75909, util=92.28% 00:24:21.439 sdch: ios=495/640, merge=0/0, ticks=9278/67953, in_queue=77232, util=92.93% 00:24:21.439 sdcj: ios=480/597, merge=0/0, ticks=9510/66184, in_queue=75694, util=93.31% 00:24:21.439 sdcl: ios=480/495, merge=0/0, ticks=12406/64451, in_queue=76858, util=93.42% 00:24:21.439 sdcn: ios=480/577, merge=0/0, ticks=13410/61671, in_queue=75081, util=93.69% 00:24:21.439 sdco: ios=320/469, merge=0/0, ticks=13857/63947, in_queue=77805, util=94.10% 00:24:21.439 sdcq: ios=481/611, merge=0/0, ticks=11937/63348, in_queue=75285, util=94.38% 00:24:21.439 sdcs: ios=482/629, merge=0/0, ticks=10029/66914, in_queue=76943, util=94.38% 00:24:21.439 sdcv: ios=481/621, merge=0/0, ticks=11635/64132, in_queue=75767, util=94.73% 00:24:21.439 sda: ios=480/567, merge=0/0, ticks=14103/62344, in_queue=76448, util=95.30% 00:24:21.439 sdb: ios=484/640, merge=0/0, ticks=10336/66645, in_queue=76981, util=95.45% 00:24:21.439 sde: ios=480/501, merge=0/0, ticks=8096/68884, in_queue=76981, util=95.58% 00:24:21.439 sdh: ios=481/628, merge=0/0, ticks=12370/64320, 
in_queue=76690, util=96.42% 00:24:21.439 sdl: ios=480/622, merge=0/0, ticks=9473/66664, in_queue=76138, util=96.80% 00:24:21.439 sdo: ios=480/519, merge=0/0, ticks=8966/68233, in_queue=77200, util=96.72% 00:24:21.439 sdr: ios=481/630, merge=0/0, ticks=12823/64936, in_queue=77760, util=97.40% 00:24:21.439 sdt: ios=359/480, merge=0/0, ticks=11204/66702, in_queue=77907, util=97.84% 00:24:21.439 sdx: ios=481/610, merge=0/0, ticks=10413/65755, in_queue=76169, util=98.20% 00:24:21.439 sdaa: ios=480/592, merge=0/0, ticks=8095/68131, in_queue=76227, util=98.70% 00:24:21.439 [2024-07-22 17:02:22.815637] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.818193] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.821089] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.823433] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.825743] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.828349] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.831343] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.835975] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.838672] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 17:02:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@78 -- # timing_exit fio 00:24:21.439 17:02:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:21.439 17:02:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:21.439 [2024-07-22 
17:02:22.843043] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.846030] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.848643] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.851644] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.854899] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.858990] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.862047] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.865170] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 17:02:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@80 -- # rm -f ./local-job0-0-verify.state 00:24:21.439 [2024-07-22 17:02:22.867615] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 17:02:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:24:21.439 17:02:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@83 -- # rm -f 00:24:21.439 [2024-07-22 17:02:22.870119] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 17:02:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@84 -- # iscsicleanup 00:24:21.439 17:02:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:24:21.439 Cleaning up iSCSI connection 00:24:21.439 17:02:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:24:21.439 [2024-07-22 17:02:22.872418] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.874773] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.877363] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.879835] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.882524] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.885086] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.888814] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.892685] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.896551] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.900946] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.903520] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.907126] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.909717] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.912803] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.915515] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.917759] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.920338] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 
17:02:22.922729] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.439 [2024-07-22 17:02:22.925554] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:22.928014] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:22.932055] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:22.934865] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:22.938773] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:22.943670] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:22.947256] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:22.949925] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:22.952604] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:22.955185] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:22.959027] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:22.962980] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:22.964796] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:22.967037] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:22.968959] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:22.970836] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:22.987799] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:22.996612] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:22.999704] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:23.002600] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:23.005103] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:23.009152] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:23.011207] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:23.015752] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.440 [2024-07-22 17:02:23.018857] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.698 [2024-07-22 17:02:23.022083] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.698 [2024-07-22 17:02:23.027394] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.698 [2024-07-22 17:02:23.033725] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.698 [2024-07-22 17:02:23.038905] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.698 [2024-07-22 17:02:23.045054] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.698 [2024-07-22 17:02:23.049251] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.698 [2024-07-22 17:02:23.052934] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:24:21.698 [2024-07-22 17:02:23.055087] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.698 [2024-07-22 17:02:23.088104] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:21.698 [2024-07-22 17:02:23.095603] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:22.639 Logging out of session [sid: 10, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:24:22.639 Logging out of session [sid: 11, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:24:22.639 Logging out of session [sid: 12, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:24:22.639 Logging out of session [sid: 13, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:24:22.639 Logging out of session [sid: 14, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:24:22.639 Logging out of session [sid: 15, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:24:22.639 Logging out of session [sid: 16, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:24:22.639 Logging out of session [sid: 17, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:24:22.639 Logging out of session [sid: 18, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:24:22.639 Logging out of session [sid: 9, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:24:22.639 Logout of [sid: 10, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:24:22.639 Logout of [sid: 11, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:24:22.639 Logout of [sid: 12, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:24:22.639 Logout of [sid: 13, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:24:22.639 Logout of [sid: 14, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 
00:24:22.639 Logout of [sid: 15, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:24:22.639 Logout of [sid: 16, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:24:22.639 Logout of [sid: 17, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:24:22.639 Logout of [sid: 18, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:24:22.639 Logout of [sid: 9, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:24:22.639 17:02:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:24:22.639 17:02:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@983 -- # rm -rf 00:24:22.639 17:02:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@85 -- # killprocess 68654 00:24:22.639 17:02:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@948 -- # '[' -z 68654 ']' 00:24:22.639 17:02:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@952 -- # kill -0 68654 00:24:22.639 17:02:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@953 -- # uname 00:24:22.639 17:02:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:22.639 17:02:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68654 00:24:22.639 17:02:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:22.639 17:02:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:22.639 killing process with pid 68654 00:24:22.639 17:02:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68654' 00:24:22.639 17:02:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@967 -- # kill 68654 00:24:22.639 17:02:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@972 -- # wait 68654 
00:24:29.206 17:02:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@86 -- # iscsitestfini 00:24:29.206 17:02:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:24:29.206 00:24:29.206 real 1m12.141s 00:24:29.206 user 4m56.466s 00:24:29.206 sys 0m26.776s 00:24:29.206 17:02:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:29.206 17:02:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:29.206 ************************************ 00:24:29.206 END TEST iscsi_tgt_iscsi_lvol 00:24:29.206 ************************************ 00:24:29.206 17:02:30 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:24:29.206 17:02:30 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@37 -- # run_test iscsi_tgt_fio /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/fio.sh 00:24:29.206 17:02:30 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:29.206 17:02:30 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:29.206 17:02:30 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:24:29.206 ************************************ 00:24:29.206 START TEST iscsi_tgt_fio 00:24:29.206 ************************************ 00:24:29.206 17:02:30 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/fio.sh 00:24:29.206 * Looking for test storage... 
00:24:29.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio 00:24:29.206 17:02:30 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:24:29.206 17:02:30 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- 
iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@11 -- # iscsitestinit 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@48 -- # '[' -z 10.0.0.1 ']' 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@53 -- # '[' -z 10.0.0.2 ']' 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@58 -- # MALLOC_BDEV_SIZE=64 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@59 -- # MALLOC_BLOCK_SIZE=4096 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@60 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@61 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@63 -- # timing_enter start_iscsi_tgt 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@66 -- # pid=73290 00:24:29.207 Process pid: 73290 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@67 -- # echo 'Process pid: 73290' 00:24:29.207 17:02:30 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@69 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:24:29.208 17:02:30 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@71 -- # waitforlisten 73290 00:24:29.208 17:02:30 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@65 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:24:29.208 17:02:30 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@829 -- # '[' -z 73290 ']' 00:24:29.208 17:02:30 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@833 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:24:29.208 17:02:30 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:29.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.208 17:02:30 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.208 17:02:30 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:29.208 17:02:30 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:24:29.208 [2024-07-22 17:02:30.341455] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:24:29.208 [2024-07-22 17:02:30.341684] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73290 ] 00:24:29.208 [2024-07-22 17:02:30.522815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.474 [2024-07-22 17:02:30.836719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.732 17:02:31 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:29.732 17:02:31 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@862 -- # return 0 00:24:29.732 17:02:31 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:24:31.110 iscsi_tgt is listening. Running tests... 00:24:31.110 17:02:32 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@75 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:24:31.110 17:02:32 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@77 -- # timing_exit start_iscsi_tgt 00:24:31.110 17:02:32 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:31.110 17:02:32 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:24:31.110 17:02:32 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:24:31.110 17:02:32 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:24:31.368 17:02:32 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 4096 00:24:31.934 17:02:33 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@82 -- # malloc_bdevs='Malloc0 ' 00:24:31.934 17:02:33 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 4096 00:24:32.193 17:02:33 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@83 -- # malloc_bdevs+=Malloc1 00:24:32.193 17:02:33 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:24:32.193 17:02:33 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 1024 512 00:24:33.571 17:02:35 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@85 -- # bdev=Malloc2 00:24:33.571 17:02:35 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias 'raid0:0 Malloc2:1' 1:2 64 -d 00:24:33.828 17:02:35 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@91 -- # sleep 1 00:24:35.204 17:02:36 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@93 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:24:35.204 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:24:35.204 17:02:36 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@94 -- # iscsiadm -m node --login -p 
10.0.0.1:3260 00:24:35.204 [2024-07-22 17:02:36.462885] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:35.204 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:24:35.204 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:24:35.204 17:02:36 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@95 -- # waitforiscsidevices 2 00:24:35.204 17:02:36 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@116 -- # local num=2 00:24:35.204 17:02:36 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:24:35.204 17:02:36 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:24:35.204 17:02:36 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:24:35.204 17:02:36 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:24:35.204 17:02:36 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # n=1 00:24:35.204 17:02:36 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 2 ']' 00:24:35.204 17:02:36 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@121 -- # sleep 0.1 00:24:35.204 [2024-07-22 17:02:36.483484] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:35.204 17:02:36 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i++ )) 00:24:35.204 17:02:36 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:24:35.204 17:02:36 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:24:35.204 17:02:36 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:24:35.204 17:02:36 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # n=2 00:24:35.204 17:02:36 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@120 -- # '[' 2 -ne 2 ']' 00:24:35.204 17:02:36 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@123 -- # return 0 00:24:35.204 
17:02:36 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@97 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; delete_tmp_files; exit 1' SIGINT SIGTERM EXIT 00:24:35.204 17:02:36 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t randrw -r 1 -v 00:24:35.204 [global] 00:24:35.204 thread=1 00:24:35.204 invalidate=1 00:24:35.204 rw=randrw 00:24:35.204 time_based=1 00:24:35.204 runtime=1 00:24:35.204 ioengine=libaio 00:24:35.204 direct=1 00:24:35.204 bs=4096 00:24:35.204 iodepth=1 00:24:35.204 norandommap=0 00:24:35.204 numjobs=1 00:24:35.204 00:24:35.204 verify_dump=1 00:24:35.204 verify_backlog=512 00:24:35.204 verify_state_save=0 00:24:35.204 do_verify=1 00:24:35.204 verify=crc32c-intel 00:24:35.204 [job0] 00:24:35.204 filename=/dev/sda 00:24:35.204 [job1] 00:24:35.204 filename=/dev/sdb 00:24:35.204 queue_depth set to 113 (sda) 00:24:35.204 queue_depth set to 113 (sdb) 00:24:35.204 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:35.204 job1: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:35.204 fio-3.35 00:24:35.204 Starting 2 threads 00:24:35.204 [2024-07-22 17:02:36.804841] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:35.204 [2024-07-22 17:02:36.808165] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:36.595 [2024-07-22 17:02:37.915946] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:36.595 [2024-07-22 17:02:37.919543] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:36.595 00:24:36.595 job0: (groupid=0, jobs=1): err= 0: pid=73456: Mon Jul 22 17:02:37 2024 00:24:36.595 read: IOPS=3467, BW=13.5MiB/s (14.2MB/s)(13.6MiB/1001msec) 00:24:36.595 slat (nsec): min=3772, max=50947, avg=8311.96, stdev=3352.94 00:24:36.595 clat (usec): 
min=116, max=425, avg=171.61, stdev=26.48 00:24:36.595 lat (usec): min=124, max=476, avg=179.92, stdev=27.92 00:24:36.596 clat percentiles (usec): 00:24:36.596 | 1.00th=[ 128], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 153], 00:24:36.596 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 169], 00:24:36.596 | 70.00th=[ 176], 80.00th=[ 188], 90.00th=[ 208], 95.00th=[ 223], 00:24:36.596 | 99.00th=[ 258], 99.50th=[ 277], 99.90th=[ 347], 99.95th=[ 379], 00:24:36.596 | 99.99th=[ 424] 00:24:36.596 bw ( KiB/s): min= 7744, max= 7744, per=28.07%, avg=7744.00, stdev= 0.00, samples=1 00:24:36.596 iops : min= 1936, max= 1936, avg=1936.00, stdev= 0.00, samples=1 00:24:36.596 write: IOPS=2035, BW=8144KiB/s (8339kB/s)(8152KiB/1001msec); 0 zone resets 00:24:36.596 slat (nsec): min=4823, max=46291, avg=9502.62, stdev=3401.64 00:24:36.596 clat (usec): min=113, max=344, avg=170.91, stdev=30.32 00:24:36.596 lat (usec): min=122, max=372, avg=180.41, stdev=31.70 00:24:36.596 clat percentiles (usec): 00:24:36.596 | 1.00th=[ 121], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 149], 00:24:36.596 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 167], 00:24:36.596 | 70.00th=[ 178], 80.00th=[ 194], 90.00th=[ 215], 95.00th=[ 233], 00:24:36.596 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 314], 99.95th=[ 318], 00:24:36.596 | 99.99th=[ 347] 00:24:36.596 bw ( KiB/s): min= 8192, max= 8192, per=50.95%, avg=8192.00, stdev= 0.00, samples=1 00:24:36.596 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:24:36.596 lat (usec) : 250=98.15%, 500=1.85% 00:24:36.596 cpu : usr=2.00%, sys=6.70%, ctx=5509, majf=0, minf=9 00:24:36.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:36.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:36.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:36.596 issued rwts: total=3471,2038,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:36.596 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:24:36.596 job1: (groupid=0, jobs=1): err= 0: pid=73459: Mon Jul 22 17:02:37 2024 00:24:36.596 read: IOPS=3430, BW=13.4MiB/s (14.1MB/s)(13.4MiB/1001msec) 00:24:36.596 slat (nsec): min=3378, max=44072, avg=7154.79, stdev=4280.86 00:24:36.596 clat (usec): min=82, max=435, avg=171.90, stdev=28.38 00:24:36.596 lat (usec): min=90, max=479, avg=179.06, stdev=30.66 00:24:36.596 clat percentiles (usec): 00:24:36.596 | 1.00th=[ 122], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:24:36.596 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 169], 00:24:36.596 | 70.00th=[ 176], 80.00th=[ 188], 90.00th=[ 208], 95.00th=[ 229], 00:24:36.596 | 99.00th=[ 269], 99.50th=[ 289], 99.90th=[ 355], 99.95th=[ 408], 00:24:36.596 | 99.99th=[ 437] 00:24:36.596 bw ( KiB/s): min= 7512, max= 7512, per=27.22%, avg=7512.00, stdev= 0.00, samples=1 00:24:36.596 iops : min= 1878, max= 1878, avg=1878.00, stdev= 0.00, samples=1 00:24:36.596 write: IOPS=1984, BW=7936KiB/s (8127kB/s)(7944KiB/1001msec); 0 zone resets 00:24:36.596 slat (nsec): min=4535, max=97873, avg=8891.74, stdev=5207.81 00:24:36.596 clat (usec): min=103, max=572, avg=181.02, stdev=36.52 00:24:36.596 lat (usec): min=113, max=586, avg=189.92, stdev=38.89 00:24:36.596 clat percentiles (usec): 00:24:36.596 | 1.00th=[ 117], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 157], 00:24:36.596 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 176], 00:24:36.596 | 70.00th=[ 188], 80.00th=[ 202], 90.00th=[ 227], 95.00th=[ 258], 00:24:36.596 | 99.00th=[ 306], 99.50th=[ 322], 99.90th=[ 412], 99.95th=[ 570], 00:24:36.596 | 99.99th=[ 570] 00:24:36.596 bw ( KiB/s): min= 8192, max= 8192, per=50.95%, avg=8192.00, stdev= 0.00, samples=1 00:24:36.596 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:24:36.596 lat (usec) : 100=0.26%, 250=96.11%, 500=3.62%, 750=0.02% 00:24:36.596 cpu : usr=3.00%, sys=4.60%, ctx=5420, majf=0, minf=5 00:24:36.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:36.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:36.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:36.596 issued rwts: total=3434,1986,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:36.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:36.596 00:24:36.596 Run status group 0 (all jobs): 00:24:36.596 READ: bw=26.9MiB/s (28.3MB/s), 13.4MiB/s-13.5MiB/s (14.1MB/s-14.2MB/s), io=27.0MiB (28.3MB), run=1001-1001msec 00:24:36.596 WRITE: bw=15.7MiB/s (16.5MB/s), 7936KiB/s-8144KiB/s (8127kB/s-8339kB/s), io=15.7MiB (16.5MB), run=1001-1001msec 00:24:36.596 00:24:36.596 Disk stats (read/write): 00:24:36.596 sda: ios=3239/1713, merge=0/0, ticks=562/288, in_queue=851, util=91.10% 00:24:36.596 sdb: ios=3182/1714, merge=0/0, ticks=553/303, in_queue=857, util=91.29% 00:24:36.596 17:02:37 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 -v 00:24:36.596 [global] 00:24:36.596 thread=1 00:24:36.596 invalidate=1 00:24:36.596 rw=randrw 00:24:36.596 time_based=1 00:24:36.596 runtime=1 00:24:36.596 ioengine=libaio 00:24:36.596 direct=1 00:24:36.596 bs=131072 00:24:36.596 iodepth=32 00:24:36.596 norandommap=0 00:24:36.596 numjobs=1 00:24:36.596 00:24:36.596 verify_dump=1 00:24:36.596 verify_backlog=512 00:24:36.596 verify_state_save=0 00:24:36.596 do_verify=1 00:24:36.596 verify=crc32c-intel 00:24:36.596 [job0] 00:24:36.596 filename=/dev/sda 00:24:36.596 [job1] 00:24:36.596 filename=/dev/sdb 00:24:36.596 queue_depth set to 113 (sda) 00:24:36.596 queue_depth set to 113 (sdb) 00:24:36.596 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:24:36.596 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:24:36.596 fio-3.35 00:24:36.596 Starting 2 threads 00:24:36.596 
[2024-07-22 17:02:38.146597] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:36.596 [2024-07-22 17:02:38.150563] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:37.985 [2024-07-22 17:02:39.290926] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:37.985 [2024-07-22 17:02:39.294731] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:37.985 00:24:37.985 job0: (groupid=0, jobs=1): err= 0: pid=73523: Mon Jul 22 17:02:39 2024 00:24:37.985 read: IOPS=1222, BW=153MiB/s (160MB/s)(156MiB/1022msec) 00:24:37.985 slat (usec): min=8, max=109, avg=29.47, stdev=12.64 00:24:37.985 clat (usec): min=1524, max=23618, avg=5963.24, stdev=4234.39 00:24:37.985 lat (usec): min=1548, max=23639, avg=5992.71, stdev=4232.88 00:24:37.985 clat percentiles (usec): 00:24:37.985 | 1.00th=[ 1827], 5.00th=[ 1958], 10.00th=[ 2089], 20.00th=[ 2245], 00:24:37.985 | 30.00th=[ 2376], 40.00th=[ 2507], 50.00th=[ 2835], 60.00th=[ 7635], 00:24:37.985 | 70.00th=[ 9503], 80.00th=[10945], 90.00th=[11469], 95.00th=[12125], 00:24:37.985 | 99.00th=[16712], 99.50th=[17957], 99.90th=[22938], 99.95th=[23725], 00:24:37.985 | 99.99th=[23725] 00:24:37.985 bw ( KiB/s): min=75776, max=111872, per=30.34%, avg=93824.00, stdev=25523.73, samples=2 00:24:37.985 iops : min= 592, max= 874, avg=733.00, stdev=199.40, samples=2 00:24:37.985 write: IOPS=749, BW=93.7MiB/s (98.2MB/s)(95.8MiB/1022msec); 0 zone resets 00:24:37.985 slat (usec): min=41, max=174, avg=95.94, stdev=20.30 00:24:37.985 clat (usec): min=5567, max=89052, avg=32693.38, stdev=6297.45 00:24:37.985 lat (usec): min=5669, max=89173, avg=32789.32, stdev=6300.50 00:24:37.985 clat percentiles (usec): 00:24:37.985 | 1.00th=[13829], 5.00th=[27657], 10.00th=[29230], 20.00th=[30540], 00:24:37.985 | 30.00th=[31327], 40.00th=[31851], 50.00th=[32375], 60.00th=[32900], 00:24:37.985 | 70.00th=[33817], 80.00th=[34341], 90.00th=[35914], 
95.00th=[36963], 00:24:37.985 | 99.00th=[62129], 99.50th=[78119], 99.90th=[88605], 99.95th=[88605], 00:24:37.985 | 99.99th=[88605] 00:24:37.985 bw ( KiB/s): min=75264, max=113920, per=49.49%, avg=94592.00, stdev=27333.92, samples=2 00:24:37.985 iops : min= 588, max= 890, avg=739.00, stdev=213.55, samples=2 00:24:37.985 lat (msec) : 2=3.72%, 4=30.17%, 10=11.17%, 20=17.52%, 50=36.87% 00:24:37.985 lat (msec) : 100=0.55% 00:24:37.985 cpu : usr=8.33%, sys=7.54%, ctx=1342, majf=0, minf=5 00:24:37.985 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0% 00:24:37.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:37.985 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:24:37.985 issued rwts: total=1249,766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:37.985 latency : target=0, window=0, percentile=100.00%, depth=32 00:24:37.985 job1: (groupid=0, jobs=1): err= 0: pid=73524: Mon Jul 22 17:02:39 2024 00:24:37.985 read: IOPS=1193, BW=149MiB/s (156MB/s)(153MiB/1022msec) 00:24:37.985 slat (usec): min=11, max=118, avg=31.20, stdev=13.81 00:24:37.985 clat (usec): min=1611, max=22546, avg=6169.79, stdev=4527.09 00:24:37.985 lat (usec): min=1640, max=22570, avg=6200.99, stdev=4525.92 00:24:37.985 clat percentiles (usec): 00:24:37.985 | 1.00th=[ 1844], 5.00th=[ 1958], 10.00th=[ 2057], 20.00th=[ 2212], 00:24:37.985 | 30.00th=[ 2343], 40.00th=[ 2507], 50.00th=[ 2835], 60.00th=[ 7832], 00:24:37.985 | 70.00th=[10421], 80.00th=[11469], 90.00th=[12125], 95.00th=[13173], 00:24:37.985 | 99.00th=[15008], 99.50th=[17957], 99.90th=[21890], 99.95th=[22676], 00:24:37.985 | 99.99th=[22676] 00:24:37.985 bw ( KiB/s): min=73216, max=107520, per=29.22%, avg=90368.00, stdev=24256.59, samples=2 00:24:37.985 iops : min= 572, max= 840, avg=706.00, stdev=189.50, samples=2 00:24:37.985 write: IOPS=743, BW=93.0MiB/s (97.5MB/s)(95.0MiB/1022msec); 0 zone resets 00:24:37.985 slat (usec): min=55, max=199, avg=97.77, stdev=19.16 00:24:37.985 
clat (usec): min=5226, max=87322, avg=32849.13, stdev=6278.46 00:24:37.985 lat (usec): min=5343, max=87438, avg=32946.90, stdev=6281.40 00:24:37.985 clat percentiles (usec): 00:24:37.985 | 1.00th=[11863], 5.00th=[27395], 10.00th=[29492], 20.00th=[30540], 00:24:37.985 | 30.00th=[31327], 40.00th=[31851], 50.00th=[32375], 60.00th=[33162], 00:24:37.985 | 70.00th=[33817], 80.00th=[34866], 90.00th=[35914], 95.00th=[37487], 00:24:37.985 | 99.00th=[61080], 99.50th=[71828], 99.90th=[87557], 99.95th=[87557], 00:24:37.985 | 99.99th=[87557] 00:24:37.985 bw ( KiB/s): min=72960, max=114176, per=48.96%, avg=93568.00, stdev=29144.11, samples=2 00:24:37.985 iops : min= 570, max= 892, avg=731.00, stdev=227.69, samples=2 00:24:37.985 lat (msec) : 2=3.89%, 4=30.10%, 10=8.48%, 20=19.90%, 50=36.92% 00:24:37.985 lat (msec) : 100=0.71% 00:24:37.985 cpu : usr=8.52%, sys=7.44%, ctx=1447, majf=0, minf=5 00:24:37.985 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.4%, >=64=0.0% 00:24:37.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:37.985 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:24:37.985 issued rwts: total=1220,760,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:37.985 latency : target=0, window=0, percentile=100.00%, depth=32 00:24:37.985 00:24:37.985 Run status group 0 (all jobs): 00:24:37.985 READ: bw=302MiB/s (317MB/s), 149MiB/s-153MiB/s (156MB/s-160MB/s), io=309MiB (324MB), run=1022-1022msec 00:24:37.985 WRITE: bw=187MiB/s (196MB/s), 93.0MiB/s-93.7MiB/s (97.5MB/s-98.2MB/s), io=191MiB (200MB), run=1022-1022msec 00:24:37.985 00:24:37.985 Disk stats (read/write): 00:24:37.985 sda: ios=1171/628, merge=0/0, ticks=7044/20350, in_queue=27394, util=90.27% 00:24:37.985 sdb: ios=1154/622, merge=0/0, ticks=7215/20184, in_queue=27399, util=90.63% 00:24:37.985 17:02:39 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@101 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 524288 -d 128 -t randrw -r 1 -v 00:24:37.985 
[global] 00:24:37.985 thread=1 00:24:37.985 invalidate=1 00:24:37.985 rw=randrw 00:24:37.985 time_based=1 00:24:37.985 runtime=1 00:24:37.985 ioengine=libaio 00:24:37.985 direct=1 00:24:37.985 bs=524288 00:24:37.985 iodepth=128 00:24:37.985 norandommap=0 00:24:37.985 numjobs=1 00:24:37.985 00:24:37.985 verify_dump=1 00:24:37.985 verify_backlog=512 00:24:37.985 verify_state_save=0 00:24:37.985 do_verify=1 00:24:37.985 verify=crc32c-intel 00:24:37.985 [job0] 00:24:37.985 filename=/dev/sda 00:24:37.985 [job1] 00:24:37.985 filename=/dev/sdb 00:24:37.985 queue_depth set to 113 (sda) 00:24:37.985 queue_depth set to 113 (sdb) 00:24:37.985 job0: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128 00:24:37.985 job1: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128 00:24:37.985 fio-3.35 00:24:37.985 Starting 2 threads 00:24:37.985 [2024-07-22 17:02:39.526852] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:37.985 [2024-07-22 17:02:39.530584] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:39.367 [2024-07-22 17:02:40.712765] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:39.626 [2024-07-22 17:02:41.007770] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:39.626 00:24:39.626 job0: (groupid=0, jobs=1): err= 0: pid=73586: Mon Jul 22 17:02:41 2024 00:24:39.626 read: IOPS=245, BW=123MiB/s (129MB/s)(163MiB/1323msec) 00:24:39.626 slat (usec): min=21, max=39132, avg=1465.89, stdev=4158.22 00:24:39.626 clat (msec): min=109, max=430, avg=249.45, stdev=100.55 00:24:39.626 lat (msec): min=109, max=430, avg=250.92, stdev=100.50 00:24:39.626 clat percentiles (msec): 00:24:39.626 | 1.00th=[ 118], 5.00th=[ 128], 10.00th=[ 169], 20.00th=[ 184], 00:24:39.626 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 199], 60.00th=[ 209], 
00:24:39.626 | 70.00th=[ 275], 80.00th=[ 393], 90.00th=[ 430], 95.00th=[ 430], 00:24:39.626 | 99.00th=[ 430], 99.50th=[ 430], 99.90th=[ 430], 99.95th=[ 430], 00:24:39.626 | 99.99th=[ 430] 00:24:39.626 bw ( KiB/s): min=115481, max=131832, per=60.29%, avg=123656.50, stdev=11561.90, samples=2 00:24:39.626 iops : min= 225, max= 257, avg=241.00, stdev=22.63, samples=2 00:24:39.626 write: IOPS=307, BW=154MiB/s (161MB/s)(135MiB/879msec); 0 zone resets 00:24:39.626 slat (usec): min=156, max=14024, avg=1531.25, stdev=2923.77 00:24:39.626 clat (msec): min=120, max=353, avg=223.55, stdev=44.39 00:24:39.626 lat (msec): min=120, max=353, avg=225.08, stdev=44.61 00:24:39.626 clat percentiles (msec): 00:24:39.626 | 1.00th=[ 128], 5.00th=[ 148], 10.00th=[ 157], 20.00th=[ 205], 00:24:39.626 | 30.00th=[ 209], 40.00th=[ 215], 50.00th=[ 220], 60.00th=[ 224], 00:24:39.626 | 70.00th=[ 236], 80.00th=[ 245], 90.00th=[ 284], 95.00th=[ 321], 00:24:39.626 | 99.00th=[ 342], 99.50th=[ 347], 99.90th=[ 355], 99.95th=[ 355], 00:24:39.626 | 99.99th=[ 355] 00:24:39.626 bw ( KiB/s): min=115481, max=160447, per=57.31%, avg=137964.00, stdev=31795.76, samples=2 00:24:39.626 iops : min= 225, max= 313, avg=269.00, stdev=62.23, samples=2 00:24:39.626 lat (msec) : 250=74.45%, 500=25.55% 00:24:39.626 cpu : usr=6.13%, sys=1.82%, ctx=306, majf=0, minf=3 00:24:39.626 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.7%, 16=5.4%, 32=10.8%, >=64=78.8% 00:24:39.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:39.626 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:24:39.626 issued rwts: total=325,270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:39.626 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:39.626 job1: (groupid=0, jobs=1): err= 0: pid=73587: Mon Jul 22 17:02:41 2024 00:24:39.626 read: IOPS=191, BW=95.6MiB/s (100MB/s)(103MiB/1072msec) 00:24:39.626 slat (usec): min=20, max=21293, avg=2105.29, stdev=3992.67 00:24:39.626 clat (msec): min=71, 
max=423, avg=272.53, stdev=110.49 00:24:39.626 lat (msec): min=79, max=431, avg=274.63, stdev=111.19 00:24:39.626 clat percentiles (msec): 00:24:39.626 | 1.00th=[ 88], 5.00th=[ 97], 10.00th=[ 116], 20.00th=[ 148], 00:24:39.626 | 30.00th=[ 180], 40.00th=[ 239], 50.00th=[ 292], 60.00th=[ 342], 00:24:39.626 | 70.00th=[ 372], 80.00th=[ 388], 90.00th=[ 397], 95.00th=[ 409], 00:24:39.626 | 99.00th=[ 418], 99.50th=[ 422], 99.90th=[ 422], 99.95th=[ 422], 00:24:39.626 | 99.99th=[ 422] 00:24:39.626 bw ( KiB/s): min=36864, max=116502, per=37.39%, avg=76683.00, stdev=56312.57, samples=2 00:24:39.626 iops : min= 72, max= 227, avg=149.50, stdev=109.60, samples=2 00:24:39.626 write: IOPS=218, BW=109MiB/s (114MB/s)(117MiB/1072msec); 0 zone resets 00:24:39.626 slat (usec): min=153, max=10977, avg=2417.88, stdev=3548.75 00:24:39.626 clat (msec): min=71, max=472, avg=299.80, stdev=109.84 00:24:39.626 lat (msec): min=71, max=472, avg=302.22, stdev=110.35 00:24:39.626 clat percentiles (msec): 00:24:39.626 | 1.00th=[ 81], 5.00th=[ 94], 10.00th=[ 126], 20.00th=[ 199], 00:24:39.626 | 30.00th=[ 247], 40.00th=[ 271], 50.00th=[ 292], 60.00th=[ 351], 00:24:39.626 | 70.00th=[ 388], 80.00th=[ 414], 90.00th=[ 426], 95.00th=[ 443], 00:24:39.626 | 99.00th=[ 464], 99.50th=[ 468], 99.90th=[ 472], 99.95th=[ 472], 00:24:39.626 | 99.99th=[ 472] 00:24:39.626 bw ( KiB/s): min=41984, max=123656, per=34.41%, avg=82820.00, stdev=57750.83, samples=2 00:24:39.626 iops : min= 82, max= 241, avg=161.50, stdev=112.43, samples=2 00:24:39.626 lat (msec) : 100=6.15%, 250=30.30%, 500=63.55% 00:24:39.626 cpu : usr=5.70%, sys=2.15%, ctx=187, majf=0, minf=7 00:24:39.626 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.3%, >=64=85.6% 00:24:39.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:39.626 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:24:39.626 issued rwts: total=205,234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:39.626 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:24:39.626 00:24:39.626 Run status group 0 (all jobs): 00:24:39.626 READ: bw=200MiB/s (210MB/s), 95.6MiB/s-123MiB/s (100MB/s-129MB/s), io=265MiB (278MB), run=1072-1323msec 00:24:39.626 WRITE: bw=235MiB/s (246MB/s), 109MiB/s-154MiB/s (114MB/s-161MB/s), io=252MiB (264MB), run=879-1072msec 00:24:39.626 00:24:39.626 Disk stats (read/write): 00:24:39.626 sda: ios=334/270, merge=0/0, ticks=21022/27331, in_queue=48354, util=85.52% 00:24:39.626 sdb: ios=164/141, merge=0/0, ticks=17220/24132, in_queue=41352, util=70.02% 00:24:39.626 17:02:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1048576 -d 1024 -t read -r 1 -n 4 00:24:39.626 [global] 00:24:39.626 thread=1 00:24:39.626 invalidate=1 00:24:39.626 rw=read 00:24:39.626 time_based=1 00:24:39.626 runtime=1 00:24:39.626 ioengine=libaio 00:24:39.626 direct=1 00:24:39.627 bs=1048576 00:24:39.627 iodepth=1024 00:24:39.627 norandommap=1 00:24:39.627 numjobs=4 00:24:39.627 00:24:39.627 [job0] 00:24:39.627 filename=/dev/sda 00:24:39.627 [job1] 00:24:39.627 filename=/dev/sdb 00:24:39.627 queue_depth set to 113 (sda) 00:24:39.627 queue_depth set to 113 (sdb) 00:24:39.885 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1024 00:24:39.885 ... 00:24:39.885 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1024 00:24:39.885 ... 
00:24:39.885 fio-3.35 00:24:39.885 Starting 8 threads 00:24:52.109 00:24:52.109 job0: (groupid=0, jobs=1): err= 0: pid=73659: Mon Jul 22 17:02:52 2024 00:24:52.109 read: IOPS=0, BW=450KiB/s (460kB/s)(5120KiB/11387msec) 00:24:52.109 slat (usec): min=1721, max=2046.3k, avg=412294.60, stdev=913430.55 00:24:52.109 clat (msec): min=9325, max=11383, avg=10967.20, stdev=917.83 00:24:52.109 lat (msec): min=11371, max=11386, avg=11379.50, stdev= 6.66 00:24:52.109 clat percentiles (msec): 00:24:52.109 | 1.00th=[ 9329], 5.00th=[ 9329], 10.00th=[ 9329], 20.00th=[ 9329], 00:24:52.109 | 30.00th=[11342], 40.00th=[11342], 50.00th=[11342], 60.00th=[11342], 00:24:52.109 | 70.00th=[11342], 80.00th=[11342], 90.00th=[11342], 95.00th=[11342], 00:24:52.109 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:24:52.109 | 99.99th=[11342] 00:24:52.109 lat (msec) : >=2000=100.00% 00:24:52.109 cpu : usr=0.00%, sys=0.03%, ctx=12, majf=0, minf=1281 00:24:52.109 IO depths : 1=20.0%, 2=40.0%, 4=40.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:52.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.109 complete : 0=0.0%, 4=0.0%, 8=100.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.109 issued rwts: total=5,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.109 latency : target=0, window=0, percentile=100.00%, depth=1024 00:24:52.109 job0: (groupid=0, jobs=1): err= 0: pid=73660: Mon Jul 22 17:02:52 2024 00:24:52.109 read: IOPS=0, BW=717KiB/s (735kB/s)(8192KiB/11418msec) 00:24:52.109 slat (usec): min=560, max=3901.3k, avg=488989.35, stdev=1378796.25 00:24:52.109 clat (msec): min=7505, max=11416, avg=10921.73, stdev=1380.27 00:24:52.109 lat (msec): min=11407, max=11417, avg=11410.72, stdev= 3.96 00:24:52.109 clat percentiles (msec): 00:24:52.109 | 1.00th=[ 7483], 5.00th=[ 7483], 10.00th=[ 7483], 20.00th=[11342], 00:24:52.109 | 30.00th=[11342], 40.00th=[11342], 50.00th=[11342], 60.00th=[11476], 00:24:52.109 | 70.00th=[11476], 80.00th=[11476], 
90.00th=[11476], 95.00th=[11476], 00:24:52.109 | 99.00th=[11476], 99.50th=[11476], 99.90th=[11476], 99.95th=[11476], 00:24:52.109 | 99.99th=[11476] 00:24:52.109 lat (msec) : >=2000=100.00% 00:24:52.109 cpu : usr=0.00%, sys=0.04%, ctx=17, majf=0, minf=2049 00:24:52.109 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:52.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.109 complete : 0=0.0%, 4=0.0%, 8=100.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.109 issued rwts: total=8,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.109 latency : target=0, window=0, percentile=100.00%, depth=1024 00:24:52.109 job0: (groupid=0, jobs=1): err= 0: pid=73661: Mon Jul 22 17:02:52 2024 00:24:52.109 read: IOPS=1, BW=1613KiB/s (1652kB/s)(18.0MiB/11428msec) 00:24:52.109 slat (usec): min=505, max=3900.7k, avg=217886.25, stdev=919107.26 00:24:52.109 clat (msec): min=7505, max=11426, avg=11199.49, stdev=921.94 00:24:52.109 lat (msec): min=11406, max=11427, avg=11417.38, stdev= 7.26 00:24:52.109 clat percentiles (msec): 00:24:52.109 | 1.00th=[ 7483], 5.00th=[ 7483], 10.00th=[11342], 20.00th=[11476], 00:24:52.109 | 30.00th=[11476], 40.00th=[11476], 50.00th=[11476], 60.00th=[11476], 00:24:52.109 | 70.00th=[11476], 80.00th=[11476], 90.00th=[11476], 95.00th=[11476], 00:24:52.109 | 99.00th=[11476], 99.50th=[11476], 99.90th=[11476], 99.95th=[11476], 00:24:52.109 | 99.99th=[11476] 00:24:52.109 lat (msec) : >=2000=100.00% 00:24:52.109 cpu : usr=0.00%, sys=0.10%, ctx=27, majf=0, minf=4609 00:24:52.109 IO depths : 1=5.6%, 2=11.1%, 4=22.2%, 8=44.4%, 16=16.7%, 32=0.0%, >=64=0.0% 00:24:52.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.109 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:24:52.109 issued rwts: total=18,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.109 latency : target=0, window=0, percentile=100.00%, depth=1024 00:24:52.109 job0: (groupid=0, jobs=1): err= 
0: pid=73662: Mon Jul 22 17:02:52 2024 00:24:52.109 read: IOPS=2, BW=2596KiB/s (2658kB/s)(29.0MiB/11439msec) 00:24:52.109 slat (usec): min=455, max=2048.5k, avg=71591.67, stdev=380206.94 00:24:52.109 clat (msec): min=9361, max=11434, avg=11353.45, stdev=383.09 00:24:52.109 lat (msec): min=11410, max=11438, avg=11425.04, stdev= 7.85 00:24:52.109 clat percentiles (msec): 00:24:52.109 | 1.00th=[ 9329], 5.00th=[11476], 10.00th=[11476], 20.00th=[11476], 00:24:52.109 | 30.00th=[11476], 40.00th=[11476], 50.00th=[11476], 60.00th=[11476], 00:24:52.109 | 70.00th=[11476], 80.00th=[11476], 90.00th=[11476], 95.00th=[11476], 00:24:52.109 | 99.00th=[11476], 99.50th=[11476], 99.90th=[11476], 99.95th=[11476], 00:24:52.109 | 99.99th=[11476] 00:24:52.109 lat (msec) : >=2000=100.00% 00:24:52.109 cpu : usr=0.00%, sys=0.14%, ctx=39, majf=0, minf=7425 00:24:52.109 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0% 00:24:52.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.109 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:24:52.109 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.109 latency : target=0, window=0, percentile=100.00%, depth=1024 00:24:52.109 job1: (groupid=0, jobs=1): err= 0: pid=73663: Mon Jul 22 17:02:52 2024 00:24:52.109 read: IOPS=2, BW=2511KiB/s (2572kB/s)(28.0MiB/11417msec) 00:24:52.109 slat (usec): min=474, max=2049.3k, avg=74443.20, stdev=387035.33 00:24:52.109 clat (msec): min=9331, max=11415, avg=11326.41, stdev=391.03 00:24:52.109 lat (msec): min=11381, max=11416, avg=11400.86, stdev=11.08 00:24:52.109 clat percentiles (msec): 00:24:52.109 | 1.00th=[ 9329], 5.00th=[11342], 10.00th=[11342], 20.00th=[11342], 00:24:52.109 | 30.00th=[11342], 40.00th=[11342], 50.00th=[11342], 60.00th=[11342], 00:24:52.109 | 70.00th=[11476], 80.00th=[11476], 90.00th=[11476], 95.00th=[11476], 00:24:52.109 | 99.00th=[11476], 99.50th=[11476], 99.90th=[11476], 99.95th=[11476], 
00:24:52.109 | 99.99th=[11476] 00:24:52.109 lat (msec) : >=2000=100.00% 00:24:52.109 cpu : usr=0.00%, sys=0.14%, ctx=39, majf=0, minf=7169 00:24:52.109 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0% 00:24:52.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.109 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:24:52.109 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.109 latency : target=0, window=0, percentile=100.00%, depth=1024 00:24:52.109 job1: (groupid=0, jobs=1): err= 0: pid=73664: Mon Jul 22 17:02:52 2024 00:24:52.109 read: IOPS=2, BW=2774KiB/s (2841kB/s)(31.0MiB/11443msec) 00:24:52.109 slat (usec): min=451, max=2049.5k, avg=67073.03, stdev=367916.50 00:24:52.109 clat (msec): min=9363, max=11441, avg=11357.62, stdev=370.27 00:24:52.109 lat (msec): min=11412, max=11442, avg=11424.69, stdev= 9.87 00:24:52.109 clat percentiles (msec): 00:24:52.109 | 1.00th=[ 9329], 5.00th=[11476], 10.00th=[11476], 20.00th=[11476], 00:24:52.109 | 30.00th=[11476], 40.00th=[11476], 50.00th=[11476], 60.00th=[11476], 00:24:52.109 | 70.00th=[11476], 80.00th=[11476], 90.00th=[11476], 95.00th=[11476], 00:24:52.109 | 99.00th=[11476], 99.50th=[11476], 99.90th=[11476], 99.95th=[11476], 00:24:52.109 | 99.99th=[11476] 00:24:52.109 lat (msec) : >=2000=100.00% 00:24:52.109 cpu : usr=0.00%, sys=0.15%, ctx=44, majf=0, minf=7937 00:24:52.109 IO depths : 1=3.2%, 2=6.5%, 4=12.9%, 8=25.8%, 16=51.6%, 32=0.0%, >=64=0.0% 00:24:52.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.109 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:24:52.109 issued rwts: total=31,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.109 latency : target=0, window=0, percentile=100.00%, depth=1024 00:24:52.109 job1: (groupid=0, jobs=1): err= 0: pid=73665: Mon Jul 22 17:02:52 2024 00:24:52.109 read: IOPS=1, BW=1796KiB/s (1839kB/s)(20.0MiB/11403msec) 
00:24:52.109 slat (usec): min=478, max=2049.6k, avg=104058.37, stdev=457944.78
00:24:52.109 clat (msec): min=9321, max=11400, avg=11283.81, stdev=462.01
00:24:52.109 lat (msec): min=11370, max=11402, avg=11387.87, stdev= 9.63
00:24:52.109 clat percentiles (msec):
00:24:52.109 | 1.00th=[ 9329], 5.00th=[ 9329], 10.00th=[11342], 20.00th=[11342],
00:24:52.109 | 30.00th=[11342], 40.00th=[11342], 50.00th=[11342], 60.00th=[11342],
00:24:52.109 | 70.00th=[11342], 80.00th=[11342], 90.00th=[11342], 95.00th=[11342],
00:24:52.109 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342],
00:24:52.109 | 99.99th=[11342]
00:24:52.109 lat (msec) : >=2000=100.00%
00:24:52.109 cpu : usr=0.00%, sys=0.11%, ctx=27, majf=0, minf=5121
00:24:52.109 IO depths : 1=5.0%, 2=10.0%, 4=20.0%, 8=40.0%, 16=25.0%, 32=0.0%, >=64=0.0%
00:24:52.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:52.109 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:24:52.109 issued rwts: total=20,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:52.109 latency : target=0, window=0, percentile=100.00%, depth=1024
00:24:52.109 job1: (groupid=0, jobs=1): err= 0: pid=73666: Mon Jul 22 17:02:52 2024
00:24:52.109 read: IOPS=0, BW=90.1KiB/s (92.3kB/s)(1024KiB/11361msec)
00:24:52.109 slat (nsec): min=2045.7M, max=2045.7M, avg=2045728776.00, stdev= 0.00
00:24:52.109 clat (nsec): min=9314.5M, max=9314.5M, avg=9314547683.00, stdev= 0.00
00:24:52.109 lat (nsec): min=11360M, max=11360M, avg=11360276459.00, stdev= 0.00
00:24:52.109 clat percentiles (msec):
00:24:52.109 | 1.00th=[ 9329], 5.00th=[ 9329], 10.00th=[ 9329], 20.00th=[ 9329],
00:24:52.109 | 30.00th=[ 9329], 40.00th=[ 9329], 50.00th=[ 9329], 60.00th=[ 9329],
00:24:52.109 | 70.00th=[ 9329], 80.00th=[ 9329], 90.00th=[ 9329], 95.00th=[ 9329],
00:24:52.109 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329],
00:24:52.109 | 99.99th=[ 9329]
00:24:52.109 lat (msec) : >=2000=100.00%
00:24:52.109 cpu : usr=0.00%, sys=0.00%, ctx=3, majf=0, minf=257
00:24:52.109 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:24:52.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:52.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:52.109 issued rwts: total=1,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:52.109 latency : target=0, window=0, percentile=100.00%, depth=1024
00:24:52.109
00:24:52.109 Run status group 0 (all jobs):
00:24:52.109 READ: bw=12.2MiB/s (12.8MB/s), 90.1KiB/s-2774KiB/s (92.3kB/s-2841kB/s), io=140MiB (147MB), run=11361-11443msec
00:24:52.109
00:24:52.109 Disk stats (read/write):
00:24:52.109 sda: ios=36/0, merge=0/0, ticks=265488/0, in_queue=265488, util=99.07%
00:24:52.109 sdb: ios=58/0, merge=0/0, ticks=281104/0, in_queue=281104, util=99.25%
00:24:52.110 17:02:52 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@104 -- # '[' 1 -eq 1 ']'
00:24:52.110 17:02:52 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t write -r 300 -v
00:24:52.110 [global]
00:24:52.110 thread=1
00:24:52.110 invalidate=1
00:24:52.110 rw=write
00:24:52.110 time_based=1
00:24:52.110 runtime=300
00:24:52.110 ioengine=libaio
00:24:52.110 direct=1
00:24:52.110 bs=4096
00:24:52.110 iodepth=1
00:24:52.110 norandommap=0
00:24:52.110 numjobs=1
00:24:52.110
00:24:52.110 verify_dump=1
00:24:52.110 verify_backlog=512
00:24:52.110 verify_state_save=0
00:24:52.110 do_verify=1
00:24:52.110 verify=crc32c-intel
00:24:52.110 [job0]
00:24:52.110 filename=/dev/sda
00:24:52.110 [job1]
00:24:52.110 filename=/dev/sdb
00:24:52.110 queue_depth set to 113 (sda)
00:24:52.110 queue_depth set to 113 (sdb)
00:24:52.110 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:24:52.110 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:24:52.110 fio-3.35
00:24:52.110
Starting 2 threads
00:24:52.110 [2024-07-22 17:02:53.035088] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:24:52.110 [2024-07-22 17:02:53.039304] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:25:04.380 [2024-07-22 17:03:04.444053] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:25:16.603 [2024-07-22 17:03:16.437902] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:25:28.802 [2024-07-22 17:03:28.434640] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:25:41.019 [2024-07-22 17:03:41.229563] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:25:53.311 [2024-07-22 17:03:54.280776] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:26:08.185 [2024-07-22 17:04:07.162842] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:26:18.154 [2024-07-22 17:04:19.656484] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:26:33.025 [2024-07-22 17:04:32.309839] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:26:33.025 [2024-07-22 17:04:34.006108] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:26:45.234 [2024-07-22 17:04:45.431264] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:26:57.441 [2024-07-22 17:04:57.797803] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:27:09.641 [2024-07-22 17:05:10.561472] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:27:21.848 [2024-07-22 17:05:22.441070] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:27:34.055 [2024-07-22 17:05:34.054802] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:27:46.270 [2024-07-22 17:05:46.674551] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:27:58.503 [2024-07-22 17:05:59.694528] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:28:13.387 [2024-07-22 17:06:12.520628] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:28:15.928 [2024-07-22 17:06:17.052784] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:28:24.037 [2024-07-22 17:06:25.254530] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:28:36.241 [2024-07-22 17:06:37.707599] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:28:48.516 [2024-07-22 17:06:49.321681] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:29:00.744 [2024-07-22 17:07:02.178753] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:29:15.655 [2024-07-22 17:07:14.771931] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:29:27.855 [2024-07-22 17:07:27.259643] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:29:40.103 [2024-07-22 17:07:40.228910] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:29:52.300 [2024-07-22 17:07:53.082301] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:29:52.300 [2024-07-22 17:07:53.152721] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:29:52.300 [2024-07-22 17:07:53.156221] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:29:52.300
00:29:52.300 job0: (groupid=0, jobs=1): err= 0: pid=73818: Mon Jul 22 17:07:53 2024
00:29:52.300 read: IOPS=2621, BW=10.2MiB/s (10.7MB/s)(3072MiB/299998msec)
00:29:52.300 slat (usec): min=2, max=650, avg= 8.51, stdev= 5.46
00:29:52.300 clat (nsec): min=1803, max=3178.1k, avg=178064.78, stdev=32176.75
00:29:52.300 lat (usec): min=104, max=3193, avg=186.57, stdev=32.48
00:29:52.300 clat percentiles (usec):
00:29:52.300 | 1.00th=[ 117], 5.00th=[ 137], 10.00th=[ 147], 20.00th=[ 155],
00:29:52.300 | 30.00th=[ 161], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 182],
00:29:52.300 | 70.00th=[ 190], 80.00th=[ 200], 90.00th=[ 217], 95.00th=[ 231],
00:29:52.300 | 99.00th=[ 269], 99.50th=[ 289], 99.90th=[ 371], 99.95th=[ 433],
00:29:52.300 | 99.99th=[ 717]
00:29:52.300 write: IOPS=2622, BW=10.2MiB/s (10.7MB/s)(3073MiB/299998msec); 0 zone resets
00:29:52.300 slat (usec): min=4, max=726, avg=11.50, stdev= 8.64
00:29:52.300 clat (nsec): min=1360, max=3838.3k, avg=179698.70, stdev=45656.84
00:29:52.301 lat (usec): min=104, max=3861, avg=191.20, stdev=45.56
00:29:52.301 clat percentiles (usec):
00:29:52.301 | 1.00th=[ 89], 5.00th=[ 116], 10.00th=[ 125], 20.00th=[ 147],
00:29:52.301 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 176], 60.00th=[ 190],
00:29:52.301 | 70.00th=[ 202], 80.00th=[ 215], 90.00th=[ 233], 95.00th=[ 249],
00:29:52.301 | 99.00th=[ 289], 99.50th=[ 314], 99.90th=[ 400], 99.95th=[ 469],
00:29:52.301 | 99.99th=[ 791]
00:29:52.301 bw ( KiB/s): min= 7184, max=12312, per=50.50%, avg=10502.71, stdev=1177.69, samples=599
00:29:52.301 iops : min= 1796, max= 3078, avg=2625.56, stdev=294.42, samples=599
00:29:52.301 lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
00:29:52.301 lat (usec) : 100=0.72%, 250=95.84%, 500=3.39%, 750=0.02%, 1000=0.01%
00:29:52.301 lat (msec) : 2=0.01%, 4=0.01%
00:29:52.301 cpu : usr=2.77%, sys=5.19%, ctx=1640851, majf=0, minf=3
00:29:52.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:29:52.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:52.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:52.301 issued rwts: total=786432,786707,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:52.301 latency : target=0, window=0, percentile=100.00%, depth=1
00:29:52.301 job1: (groupid=0, jobs=1): err= 0: pid=73822: Mon Jul 22 17:07:53 2024
00:29:52.301 read: IOPS=2575, BW=10.1MiB/s (10.5MB/s)(3018MiB/300000msec)
00:29:52.301 slat (usec): min=2, max=569, avg= 7.67, stdev= 5.39
00:29:52.301 clat (nsec): min=1250, max=3985.4k, avg=176819.96, stdev=35867.94
00:29:52.301 lat (usec): min=85, max=3998, avg=184.49, stdev=36.83
00:29:52.301 clat percentiles (usec):
00:29:52.301 | 1.00th=[ 130], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 155],
00:29:52.301 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176],
00:29:52.301 | 70.00th=[ 184], 80.00th=[ 194], 90.00th=[ 215], 95.00th=[ 239],
00:29:52.301 | 99.00th=[ 293], 99.50th=[ 314], 99.90th=[ 412], 99.95th=[ 474],
00:29:52.301 | 99.99th=[ 799]
00:29:52.301 write: IOPS=2576, BW=10.1MiB/s (10.6MB/s)(3020MiB/300000msec); 0 zone resets
00:29:52.301 slat (usec): min=3, max=1243, avg=11.17, stdev= 8.37
00:29:52.301 clat (nsec): min=1429, max=3563.2k, avg=189048.54, stdev=54900.32
00:29:52.301 lat (usec): min=97, max=3686, avg=200.22, stdev=55.12
00:29:52.301 clat percentiles (usec):
00:29:52.301 | 1.00th=[ 84], 5.00th=[ 109], 10.00th=[ 119], 20.00th=[ 153],
00:29:52.301 | 30.00th=[ 163], 40.00th=[ 172], 50.00th=[ 182], 60.00th=[ 194],
00:29:52.301 | 70.00th=[ 210], 80.00th=[ 237], 90.00th=[ 265], 95.00th=[ 281],
00:29:52.301 | 99.00th=[ 326], 99.50th=[ 351], 99.90th=[ 429], 99.95th=[ 474],
00:29:52.301 | 99.99th=[ 791]
00:29:52.301 bw ( KiB/s): min= 7024, max=12312, per=49.61%, avg=10317.98, stdev=1233.79, samples=599
00:29:52.301 iops : min= 1756, max= 3078, avg=2579.39, stdev=308.45, samples=599
00:29:52.301 lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
00:29:52.301 lat (usec) : 100=1.10%, 250=89.75%, 500=9.10%, 750=0.03%, 1000=0.01%
00:29:52.301 lat (msec) : 2=0.01%, 4=0.01%
00:29:52.301 cpu : usr=2.74%, sys=4.78%, ctx=1617998, majf=0, minf=6
00:29:52.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:29:52.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:52.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:52.301 issued rwts: total=772608,773059,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:52.301 latency : target=0, window=0, percentile=100.00%, depth=1
00:29:52.301
00:29:52.301 Run status group 0 (all jobs):
00:29:52.301 READ: bw=20.3MiB/s (21.3MB/s), 10.1MiB/s-10.2MiB/s (10.5MB/s-10.7MB/s), io=6090MiB (6386MB), run=299998-300000msec
00:29:52.301 WRITE: bw=20.3MiB/s (21.3MB/s), 10.1MiB/s-10.2MiB/s (10.6MB/s-10.7MB/s), io=6093MiB (6389MB), run=299998-300000msec
00:29:52.301
00:29:52.301 Disk stats (read/write):
00:29:52.301 sda: ios=787363/786432, merge=0/0, ticks=133829/139467, in_queue=273295, util=100.00%
00:29:52.301 sdb: ios=772709/772608, merge=0/0, ticks=128532/143950, in_queue=272483, util=100.00%
00:29:52.301 17:07:53 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@116 -- # fio_pid=77051
00:29:52.301 17:07:53 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1048576 -d 128 -t rw -r 10
00:29:52.301 17:07:53 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@118 -- # sleep 3
00:29:52.301 [global]
00:29:52.301 thread=1
00:29:52.301 invalidate=1
00:29:52.301 rw=rw
00:29:52.301 time_based=1
00:29:52.301 runtime=10
00:29:52.301 ioengine=libaio
00:29:52.301 direct=1
00:29:52.301 bs=1048576
00:29:52.301 iodepth=128
00:29:52.301 norandommap=1
00:29:52.301 numjobs=1
00:29:52.301
00:29:52.301 [job0]
00:29:52.301 filename=/dev/sda
00:29:52.301 [job1]
00:29:52.301 filename=/dev/sdb
00:29:52.301 queue_depth set to 113 (sda)
00:29:52.301 queue_depth set to 113 (sdb)
00:29:52.301 job0: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:29:52.301 job1: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:29:52.301 fio-3.35
00:29:52.301 Starting 2 threads
00:29:52.301 [2024-07-22
17:07:53.365732] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:29:52.301 [2024-07-22 17:07:53.369609] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:29:54.830 17:07:56 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:29:55.089 [2024-07-22 17:07:56.474166] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (raid0) received event(SPDK_BDEV_EVENT_REMOVE) 00:29:55.089 [2024-07-22 17:07:56.475074] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6a 00:29:55.089 [2024-07-22 17:07:56.477378] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6a 00:29:55.089 [2024-07-22 17:07:56.479077] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6a 00:29:55.089 [2024-07-22 17:07:56.480949] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6a 00:29:55.089 [2024-07-22 17:07:56.482755] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6a 00:29:55.089 [2024-07-22 17:07:56.484976] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6a 00:29:55.089 [2024-07-22 17:07:56.490445] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6a 00:29:55.089 [2024-07-22 17:07:56.493060] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6a 00:29:55.089 [2024-07-22 17:07:56.494839] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6a 00:29:55.089 [2024-07-22 17:07:56.496540] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6a 00:29:55.089 [2024-07-22 17:07:56.498485] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6a 00:29:55.089 [2024-07-22 17:07:56.500400] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6a 00:29:55.089 [2024-07-22 
17:07:56.502075] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6a 00:29:55.089 [2024-07-22 17:07:56.508469] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6a 00:29:55.089 [2024-07-22 17:07:56.510147] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6a 00:29:55.089 [2024-07-22 17:07:56.511846] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6a 00:29:55.089 [2024-07-22 17:07:56.513838] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6b 00:29:55.089 [2024-07-22 17:07:56.515808] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6b 00:29:55.089 [2024-07-22 17:07:56.517504] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6b 00:29:55.089 [2024-07-22 17:07:56.519479] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6b 00:29:55.089 [2024-07-22 17:07:56.521411] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6b 00:29:55.089 [2024-07-22 17:07:56.523083] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6b 00:29:55.089 [2024-07-22 17:07:56.524740] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6b 00:29:55.089 [2024-07-22 17:07:56.524909] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6b 00:29:55.089 [2024-07-22 17:07:56.525016] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6b 00:29:55.089 [2024-07-22 17:07:56.525126] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6b 00:29:55.089 [2024-07-22 17:07:56.525231] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6b 00:29:55.089 [2024-07-22 17:07:56.525366] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6b 00:29:55.089 [2024-07-22 17:07:56.525477] 
iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6b 00:29:55.089 [2024-07-22 17:07:56.533558] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6b 00:29:55.089 [2024-07-22 17:07:56.533712] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6b 00:29:55.089 17:07:56 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@124 -- # for malloc_bdev in $malloc_bdevs 00:29:55.089 17:07:56 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:29:55.089 [2024-07-22 17:07:56.557413] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6b 00:29:55.089 [2024-07-22 17:07:56.557560] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6c 00:29:55.090 [2024-07-22 17:07:56.557675] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6c 00:29:55.090 [2024-07-22 17:07:56.557780] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6c 00:29:55.090 [2024-07-22 17:07:56.563452] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6c 00:29:55.090 [2024-07-22 17:07:56.565308] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6c 00:29:55.090 [2024-07-22 17:07:56.567042] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6c 00:29:55.090 [2024-07-22 17:07:56.568675] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6c 00:29:55.090 [2024-07-22 17:07:56.570493] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6c 00:29:55.090 [2024-07-22 17:07:56.572162] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6c 00:29:55.090 [2024-07-22 17:07:56.573927] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6c 00:29:55.090 [2024-07-22 17:07:56.575727] iscsi.c:4221:iscsi_pdu_hdr_op_data: 
*ERROR*: Not found task for transfer_tag=a6c 00:29:55.090 [2024-07-22 17:07:56.577056] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6c 00:29:55.090 [2024-07-22 17:07:56.579138] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6c 00:29:55.090 [2024-07-22 17:07:56.580677] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6c 00:29:55.090 [2024-07-22 17:07:56.582599] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6c 00:29:55.090 [2024-07-22 17:07:56.584171] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=a6c 00:29:55.656 17:07:57 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@124 -- # for malloc_bdev in $malloc_bdevs 00:29:55.656 17:07:57 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:29:55.656 fio: io_u error on file /dev/sda: Input/output error: write offset=50331648, buflen=1048576 00:29:55.656 fio: io_u error on file /dev/sda: Input/output error: write offset=51380224, buflen=1048576 00:29:55.913 17:07:57 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:29:56.172 fio: io_u error on file /dev/sda: Input/output error: write offset=52428800, buflen=1048576 00:29:56.172 fio: io_u error on file /dev/sda: Input/output error: write offset=53477376, buflen=1048576 00:29:56.172 fio: io_u error on file /dev/sda: Input/output error: write offset=54525952, buflen=1048576 00:29:56.172 fio: io_u error on file /dev/sda: Input/output error: write offset=55574528, buflen=1048576 00:29:56.172 fio: io_u error on file /dev/sda: Input/output error: write offset=56623104, buflen=1048576 00:29:56.172 fio: io_u error on file /dev/sda: Input/output error: write offset=57671680, buflen=1048576 00:29:56.172 fio: io_u error on file /dev/sda: Input/output error: write offset=58720256, buflen=1048576 00:29:56.173 
fio: io_u error on file /dev/sda: Input/output error: write offset=59768832, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=47185920, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=48234496, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=49283072, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=60817408, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=61865984, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=62914560, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=35651584, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=63963136, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=65011712, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=36700160, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=66060288, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=37748736, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=38797312, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=67108864, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=39845888, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=68157440, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=69206016, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=70254592, buflen=1048576 00:29:56.173 fio: io_u 
error on file /dev/sda: Input/output error: write offset=71303168, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=72351744, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=40894464, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=41943040, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=42991616, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=73400320, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=74448896, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=75497472, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=44040192, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=76546048, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=45088768, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=46137344, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=77594624, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=47185920, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=78643200, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=79691776, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=48234496, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=80740352, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=105906176, buflen=1048576 00:29:56.173 fio: io_u error on 
file /dev/sda: Input/output error: write offset=81788928, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=68157440, buflen=1048576 00:29:56.173 fio: pid=77080, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=82837504, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=69206016, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=83886080, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=84934656, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=49283072, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=70254592, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=71303168, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=85983232, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=106954752, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=87031808, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=50331648, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=108003328, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=72351744, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=51380224, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=73400320, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=109051904, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: 
Input/output error: write offset=88080384, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=52428800, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=110100480, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=89128960, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=74448896, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=53477376, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=90177536, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=54525952, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=75497472, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=91226112, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=92274688, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=76546048, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=77594624, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=93323264, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=111149056, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=112197632, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=78643200, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=94371840, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=55574528, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: 
Input/output error: write offset=113246208, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=56623104, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=79691776, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=114294784, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=57671680, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=58720256, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=115343360, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=80740352, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=59768832, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=81788928, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=95420416, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=82837504, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=96468992, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=116391936, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=60817408, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=117440512, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=97517568, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=83886080, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=98566144, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: 
Input/output error: read offset=84934656, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=99614720, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=100663296, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=118489088, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=61865984, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=62914560, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=101711872, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=85983232, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=102760448, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=87031808, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=119537664, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=88080384, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=120586240, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=63963136, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=103809024, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=121634816, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=104857600, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=89128960, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=122683392, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: 
Input/output error: read offset=65011712, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: write offset=123731968, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=66060288, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=90177536, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=67108864, buflen=1048576 00:29:56.173 fio: io_u error on file /dev/sda: Input/output error: read offset=91226112, buflen=1048576 00:29:56.174 [2024-07-22 17:07:57.662908] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Malloc2) received event(SPDK_BDEV_EVENT_REMOVE) 00:29:56.174 [2024-07-22 17:07:57.665108] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b03 00:29:56.174 [2024-07-22 17:07:57.666658] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b03 00:29:56.174 [2024-07-22 17:07:57.667799] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b03 00:29:56.174 [2024-07-22 17:07:57.669438] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b03 00:29:59.462 [2024-07-22 17:08:00.453930] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b04 00:29:59.462 [2024-07-22 17:08:00.455433] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b04 00:29:59.462 [2024-07-22 17:08:00.457060] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b04 00:29:59.462 [2024-07-22 17:08:00.458237] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b04 00:29:59.462 [2024-07-22 17:08:00.459711] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b04 00:29:59.462 [2024-07-22 17:08:00.460893] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b04 00:29:59.462 [2024-07-22 17:08:00.462364] 
iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b04 00:29:59.462 [2024-07-22 17:08:00.463642] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b04 00:29:59.462 [2024-07-22 17:08:00.464854] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b04 00:29:59.462 [2024-07-22 17:08:00.466571] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b04 00:29:59.462 [2024-07-22 17:08:00.467799] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b04 00:29:59.462 [2024-07-22 17:08:00.469342] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b04 00:29:59.462 [2024-07-22 17:08:00.470513] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b04 00:29:59.462 [2024-07-22 17:08:00.471736] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b04 00:29:59.462 [2024-07-22 17:08:00.473197] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b04 00:29:59.462 [2024-07-22 17:08:00.474589] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b04 00:29:59.462 [2024-07-22 17:08:00.475791] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b05 00:29:59.462 [2024-07-22 17:08:00.477335] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b05 00:29:59.462 17:08:00 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@131 -- # fio_status=0 00:29:59.462 17:08:00 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@132 -- # wait 77051 00:29:59.462 [2024-07-22 17:08:00.478445] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b05 00:29:59.462 [2024-07-22 17:08:00.479897] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b05 00:29:59.462 [2024-07-22 17:08:00.481077] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b05 00:29:59.462 [2024-07-22 
17:08:00.482571] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b05 00:29:59.462 [2024-07-22 17:08:00.483824] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b05 00:29:59.462 [2024-07-22 17:08:00.485460] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b05 00:29:59.462 [2024-07-22 17:08:00.486622] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b05 00:29:59.462 [2024-07-22 17:08:00.488321] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b05 00:29:59.462 [2024-07-22 17:08:00.489568] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b05 00:29:59.462 [2024-07-22 17:08:00.490844] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b05 00:29:59.462 [2024-07-22 17:08:00.492541] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b05 00:29:59.462 [2024-07-22 17:08:00.493855] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b05 00:29:59.462 [2024-07-22 17:08:00.495572] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b05 00:29:59.462 [2024-07-22 17:08:00.496781] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b05 00:29:59.462 [2024-07-22 17:08:00.498416] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b06 00:29:59.462 [2024-07-22 17:08:00.499588] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b06 00:29:59.462 [2024-07-22 17:08:00.500822] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b06 00:29:59.462 [2024-07-22 17:08:00.502344] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b06 00:29:59.462 [2024-07-22 17:08:00.503535] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b06 00:29:59.462 [2024-07-22 17:08:00.504741] 
iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b06 00:29:59.462 [2024-07-22 17:08:00.506366] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b06 00:29:59.462 [2024-07-22 17:08:00.507494] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b06 00:29:59.462 [2024-07-22 17:08:00.508734] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b06 00:29:59.462 [2024-07-22 17:08:00.510366] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b06 00:29:59.462 [2024-07-22 17:08:00.511534] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b06 00:29:59.462 [2024-07-22 17:08:00.512771] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b06 00:29:59.462 [2024-07-22 17:08:00.514539] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b06 00:29:59.462 [2024-07-22 17:08:00.515812] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b06 00:29:59.462 [2024-07-22 17:08:00.517310] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b06 00:29:59.462 [2024-07-22 17:08:00.518438] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=b06 00:29:59.462 fio: io_u error on file /dev/sdb: Input/output error: write offset=373293056, buflen=1048576 00:29:59.462 fio: io_u error on file /dev/sdb: Input/output error: write offset=377487360, buflen=1048576 00:29:59.462 fio: io_u error on file /dev/sdb: Input/output error: write offset=378535936, buflen=1048576 00:29:59.462 fio: io_u error on file /dev/sdb: Input/output error: write offset=379584512, buflen=1048576 00:29:59.462 fio: io_u error on file /dev/sdb: Input/output error: write offset=380633088, buflen=1048576 00:29:59.462 fio: io_u error on file /dev/sdb: Input/output error: write offset=381681664, buflen=1048576 00:29:59.462 fio: io_u error on file /dev/sdb: 
Input/output error: write offset=382730240, buflen=1048576 00:29:59.462 fio: io_u error on file /dev/sdb: Input/output error: write offset=383778816, buflen=1048576 00:29:59.462 fio: io_u error on file /dev/sdb: Input/output error: write offset=374341632, buflen=1048576 00:29:59.462 fio: io_u error on file /dev/sdb: Input/output error: write offset=375390208, buflen=1048576 00:29:59.462 fio: io_u error on file /dev/sdb: Input/output error: write offset=376438784, buflen=1048576 00:29:59.462 fio: io_u error on file /dev/sdb: Input/output error: write offset=384827392, buflen=1048576 00:29:59.462 fio: io_u error on file /dev/sdb: Input/output error: read offset=344981504, buflen=1048576 00:29:59.462 fio: io_u error on file /dev/sdb: Input/output error: write offset=385875968, buflen=1048576 00:29:59.462 fio: io_u error on file /dev/sdb: Input/output error: read offset=346030080, buflen=1048576 00:29:59.462 fio: io_u error on file /dev/sdb: Input/output error: write offset=386924544, buflen=1048576 00:29:59.462 fio: io_u error on file /dev/sdb: Input/output error: write offset=387973120, buflen=1048576 00:29:59.462 fio: io_u error on file /dev/sdb: Input/output error: read offset=347078656, buflen=1048576 00:29:59.462 fio: io_u error on file /dev/sdb: Input/output error: write offset=389021696, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=348127232, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=390070272, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=391118848, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=349175808, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=392167424, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=350224384, buflen=1048576 00:29:59.463 fio: io_u error on file 
/dev/sdb: Input/output error: write offset=393216000, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=394264576, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=395313152, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=396361728, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=397410304, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=398458880, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=351272960, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=352321536, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=399507456, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=353370112, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=354418688, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=355467264, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=356515840, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=400556032, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=357564416, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=358612992, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=401604608, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=359661568, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=402653184, buflen=1048576 00:29:59.463 fio: io_u error on 
file /dev/sdb: Input/output error: read offset=360710144, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=403701760, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=404750336, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=405798912, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=406847488, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=407896064, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=408944640, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=361758720, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=362807296, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=363855872, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=364904448, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=365953024, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=409993216, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=411041792, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=412090368, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=413138944, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=414187520, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=415236096, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=367001600, buflen=1048576 00:29:59.463 fio: io_u 
error on file /dev/sdb: Input/output error: read offset=368050176, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=416284672, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=417333248, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=418381824, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=419430400, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=420478976, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=369098752, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=421527552, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=370147328, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=422576128, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=371195904, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=423624704, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=424673280, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=425721856, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=372244480, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=373293056, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=374341632, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=375390208, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=426770432, buflen=1048576 00:29:59.463 
fio: io_u error on file /dev/sdb: Input/output error: write offset=427819008, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=428867584, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=429916160, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=376438784, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=430964736, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=432013312, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=377487360, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=378535936, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=433061888, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=379584512, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=434110464, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=380633088, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=381681664, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=435159040, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=436207616, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=382730240, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=437256192, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=383778816, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=384827392, buflen=1048576 
00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=385875968, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=438304768, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=386924544, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=439353344, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=440401920, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=387973120, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=441450496, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=442499072, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=443547648, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=444596224, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=389021696, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=445644800, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=446693376, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=447741952, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=448790528, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=390070272, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=391118848, buflen=1048576 00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=392167424, buflen=1048576 00:29:59.463 fio: pid=77081, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 
00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=449839104, buflen=1048576
00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=450887680, buflen=1048576
00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=451936256, buflen=1048576
00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=393216000, buflen=1048576
00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=452984832, buflen=1048576
00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=454033408, buflen=1048576
00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=455081984, buflen=1048576
00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: write offset=456130560, buflen=1048576
00:29:59.463 fio: io_u error on file /dev/sdb: Input/output error: read offset=394264576, buflen=1048576
00:29:59.463
00:29:59.463 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=77080: Mon Jul 22 17:08:00 2024
00:29:59.463 read: IOPS=118, BW=105MiB/s (110MB/s)(418MiB/3972msec)
00:29:59.463 slat (usec): min=30, max=207081, avg=4304.61, stdev=13222.02
00:29:59.463 clat (msec): min=180, max=536, avg=398.94, stdev=64.01
00:29:59.463 lat (msec): min=180, max=554, avg=402.53, stdev=64.80
00:29:59.463 clat percentiles (msec):
00:29:59.463 | 1.00th=[ 182], 5.00th=[ 251], 10.00th=[ 330], 20.00th=[ 359],
00:29:59.463 | 30.00th=[ 384], 40.00th=[ 397], 50.00th=[ 405], 60.00th=[ 418],
00:29:59.463 | 70.00th=[ 430], 80.00th=[ 439], 90.00th=[ 472], 95.00th=[ 493],
00:29:59.463 | 99.00th=[ 535], 99.50th=[ 535], 99.90th=[ 535], 99.95th=[ 535],
00:29:59.463 | 99.99th=[ 535]
00:29:59.463 bw ( KiB/s): min=79872, max=167600, per=100.00%, avg=122206.43, stdev=28593.66, samples=7
00:29:59.463 iops : min= 78, max= 163, avg=119.14, stdev=27.67, samples=7
00:29:59.463 write: IOPS=126, BW=108MiB/s (113MB/s)(429MiB/3972msec); 0 zone resets
00:29:59.463 slat (usec): min=46, max=82085, avg=2866.43, stdev=6908.04
00:29:59.463 clat (msec): min=275, max=760, avg=467.93, stdev=59.53
00:29:59.463 lat (msec): min=275, max=780, avg=470.70, stdev=60.33
00:29:59.463 clat percentiles (msec):
00:29:59.463 | 1.00th=[ 300], 5.00th=[ 330], 10.00th=[ 409], 20.00th=[ 435],
00:29:59.463 | 30.00th=[ 451], 40.00th=[ 464], 50.00th=[ 472], 60.00th=[ 485],
00:29:59.463 | 70.00th=[ 493], 80.00th=[ 506], 90.00th=[ 531], 95.00th=[ 550],
00:29:59.463 | 99.00th=[ 592], 99.50th=[ 617], 99.90th=[ 760], 99.95th=[ 760],
00:29:59.463 | 99.99th=[ 760]
00:29:59.463 bw ( KiB/s): min=38912, max=182272, per=100.00%, avg=125430.14, stdev=45162.51, samples=7
00:29:59.463 iops : min= 38, max= 178, avg=122.29, stdev=44.00, samples=7
00:29:59.463 lat (msec) : 250=1.54%, 500=72.41%, 750=12.82%, 1000=0.10%
00:29:59.463 cpu : usr=1.41%, sys=1.79%, ctx=242, majf=0, minf=1
00:29:59.463 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.5%
00:29:59.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:59.463 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:29:59.463 issued rwts: total=472,503,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:59.463 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:59.463 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=77081: Mon Jul 22 17:08:00 2024
00:29:59.463 read: IOPS=54, BW=47.4MiB/s (49.7MB/s)(329MiB/6945msec)
00:29:59.463 slat (usec): min=33, max=181116, avg=4597.81, stdev=14244.53
00:29:59.463 clat (msec): min=386, max=3299, avg=920.72, stdev=715.89
00:29:59.463 lat (msec): min=386, max=3299, avg=925.86, stdev=717.55
00:29:59.463 clat percentiles (msec):
00:29:59.463 | 1.00th=[ 397], 5.00th=[ 422], 10.00th=[ 439], 20.00th=[ 481],
00:29:59.463 | 30.00th=[ 535], 40.00th=[ 667], 50.00th=[ 760], 60.00th=[ 835],
00:29:59.463 | 70.00th=[ 885], 80.00th=[ 936], 90.00th=[ 1133], 95.00th=[ 3037],
00:29:59.463 | 99.00th=[ 3104], 99.50th=[ 3104], 99.90th=[ 3306], 99.95th=[ 3306],
00:29:59.463 | 99.99th=[ 3306]
00:29:59.464 bw ( KiB/s): min=24576, max=155648, per=69.27%, avg=76295.62, stdev=44301.23, samples=8
00:29:59.464 iops : min= 24, max= 152, avg=74.25, stdev=43.40, samples=8
00:29:59.464 write: IOPS=62, BW=51.3MiB/s (53.7MB/s)(356MiB/6945msec); 0 zone resets
00:29:59.464 slat (usec): min=54, max=2792.7k, avg=11936.72, stdev=134071.25
00:29:59.464 clat (msec): min=432, max=3327, avg=999.30, stdev=700.51
00:29:59.464 lat (msec): min=447, max=3332, avg=1005.56, stdev=699.45
00:29:59.464 clat percentiles (msec):
00:29:59.464 | 1.00th=[ 451], 5.00th=[ 489], 10.00th=[ 542], 20.00th=[ 600],
00:29:59.464 | 30.00th=[ 684], 40.00th=[ 768], 50.00th=[ 852], 60.00th=[ 919],
00:29:59.464 | 70.00th=[ 944], 80.00th=[ 986], 90.00th=[ 1116], 95.00th=[ 3104],
00:29:59.464 | 99.00th=[ 3306], 99.50th=[ 3339], 99.90th=[ 3339], 99.95th=[ 3339],
00:29:59.464 | 99.99th=[ 3339]
00:29:59.464 bw ( KiB/s): min=40878, max=157696, per=71.88%, avg=83193.38, stdev=37340.07, samples=8
00:29:59.464 iops : min= 39, max= 154, avg=81.00, stdev=36.59, samples=8
00:29:59.464 lat (msec) : 500=12.30%, 750=23.49%, 1000=35.30%, 2000=5.54%, >=2000=7.63%
00:29:59.464 cpu : usr=0.71%, sys=0.89%, ctx=294, majf=0, minf=1
00:29:59.464 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.3%
00:29:59.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:59.464 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:29:59.464 issued rwts: total=377,436,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:59.464 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:59.464
00:29:59.464 Run status group 0 (all jobs):
00:29:59.464 READ: bw=108MiB/s (113MB/s), 47.4MiB/s-105MiB/s (49.7MB/s-110MB/s), io=747MiB (783MB), run=3972-6945msec
00:29:59.464 WRITE: bw=113MiB/s (119MB/s), 51.3MiB/s-108MiB/s (53.7MB/s-113MB/s), io=785MiB (823MB), run=3972-6945msec
00:29:59.464
00:29:59.464 Disk stats (read/write):
00:29:59.464 sda: ios=507/490, merge=0/0, ticks=72042/110222, in_queue=182264, util=88.24%
00:29:59.464 sdb: ios=377/364, merge=0/0, ticks=92230/130153, in_queue=222382, util=88.96%
00:29:59.464 17:08:00 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@132 -- # fio_status=2
00:29:59.464 17:08:00 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@134 -- # '[' 2 -eq 0 ']'
00:29:59.464 17:08:00 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@138 -- # echo 'iscsi hotplug test: fio failed as expected'
00:29:59.464 iscsi hotplug test: fio failed as expected
00:29:59.464 17:08:00 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@141 -- # iscsicleanup
00:29:59.464 17:08:00 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection'
00:29:59.464 Cleaning up iSCSI connection
00:29:59.464 17:08:00 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout
00:29:59.464 Logging out of session [sid: 19, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260]
00:29:59.464 Logout of [sid: 19, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful.
00:29:59.464 17:08:00 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete
00:29:59.464 17:08:00 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@983 -- # rm -rf
00:29:59.464 17:08:00 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_delete_target_node iqn.2016-06.io.spdk:Target3
00:29:59.464 17:08:00 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@144 -- # delete_tmp_files
00:29:59.464 17:08:00 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@14 -- # rm -f /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/iscsi2.json
00:29:59.464 17:08:01 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@15 -- # rm -f ./local-job0-0-verify.state
00:29:59.464 17:08:01 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@16 -- # rm -f ./local-job1-1-verify.state
00:29:59.464 17:08:01 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:29:59.464 17:08:01 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@148 -- # killprocess 73290
00:29:59.464 17:08:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@948 -- # '[' -z 73290 ']'
00:29:59.464 17:08:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@952 -- # kill -0 73290
00:29:59.464 17:08:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@953 -- # uname
00:29:59.464 17:08:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:59.464 17:08:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73290
killing process with pid 73290
17:08:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:29:59.464 17:08:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:29:59.464 17:08:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73290'
00:29:59.464 17:08:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@967 -- # kill 73290
00:29:59.464 17:08:01 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@972 -- # wait 73290
00:30:02.000 17:08:03 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@150 -- # iscsitestfini
00:30:02.000 17:08:03 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']'
00:30:02.000
00:30:02.000 real 5m33.391s
00:30:02.000 user 3m43.918s
00:30:02.000 sys 1m53.167s
00:30:02.000 17:08:03 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@1124 -- # xtrace_disable
00:30:02.000 17:08:03 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x
00:30:02.000 ************************************
00:30:02.000 END TEST iscsi_tgt_fio
00:30:02.000 ************************************
00:30:02.000 17:08:03 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0
00:30:02.000 17:08:03 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@38 -- # run_test iscsi_tgt_qos /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos/qos.sh
00:30:02.001 17:08:03 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:30:02.001 17:08:03 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:02.001 17:08:03 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x
00:30:02.001 ************************************
00:30:02.001 START TEST iscsi_tgt_qos
00:30:02.001 ************************************
00:30:02.001 17:08:03 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos/qos.sh
00:30:02.258 * Looking for test storage...
00:30:02.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- 
iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@11 -- # iscsitestinit 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@44 -- # '[' -z 10.0.0.1 ']' 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@49 -- # '[' -z 10.0.0.2 ']' 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@54 -- # MALLOC_BDEV_SIZE=64 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@55 -- # MALLOC_BLOCK_SIZE=512 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@56 -- # IOPS_RESULT= 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@57 -- # BANDWIDTH_RESULT= 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@58 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@60 -- # timing_enter start_iscsi_tgt 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@63 -- # pid=77285 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@62 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt 00:30:02.258 Process pid: 77285 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@64 -- # echo 'Process pid: 77285' 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@65 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@66 -- # waitforlisten 77285 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@829 -- # '[' -z 77285 ']' 00:30:02.258 17:08:03 
iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:02.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:02.258 17:08:03 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:02.258 [2024-07-22 17:08:03.767618] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:02.258 [2024-07-22 17:08:03.767816] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77285 ] 00:30:02.520 [2024-07-22 17:08:03.936209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.779 [2024-07-22 17:08:04.221559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@862 -- # return 0 00:30:03.716 iscsi_tgt is listening. Running tests... 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@67 -- # echo 'iscsi_tgt is listening. Running tests...' 
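For reference, the target-side setup that the trace performs next goes through SPDK's JSON-RPC interface (`rpc_cmd` wraps `rpc.py`). Issued by hand, the same sequence would look roughly like this — a sketch, not the harness's actual invocation; the `rpc.py` path and working directory are assumed, while the tags, IPs, and sizes match the values visible in this log:

```shell
# Sketch of the RPC setup sequence seen in the trace.
RPC="scripts/rpc.py"                       # assumed: run from an SPDK checkout

$RPC iscsi_create_portal_group 1 10.0.0.1:3260        # portal group, tag 1
$RPC iscsi_create_initiator_group 2 ANY 10.0.0.2/32   # initiator group, tag 2
$RPC bdev_malloc_create 64 512             # 64 MiB bdev, 512 B blocks -> "Malloc0"
$RPC iscsi_create_target_node Target1 Target1_alias Malloc0:0 1:2 64 -d
```

After this, the initiator-side `iscsiadm -m discovery` / `iscsiadm -m node --login` pair seen in the trace exposes the LUN as a local SCSI disk (here `/dev/sda`), which is what fio is then pointed at.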
00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@69 -- # timing_exit start_iscsi_tgt 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@71 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@72 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@73 -- # rpc_cmd bdev_malloc_create 64 512 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:03.716 Malloc0 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@78 -- # rpc_cmd iscsi_create_target_node Target1 Target1_alias Malloc0:0 1:2 64 -d 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.716 17:08:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@79 
-- # sleep 1 00:30:05.091 17:08:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@81 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:30:05.091 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:30:05.091 17:08:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@82 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:30:05.091 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:30:05.091 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:30:05.091 17:08:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@84 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:30:05.091 17:08:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@87 -- # run_fio Malloc0 00:30:05.091 17:08:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:30:05.091 17:08:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:30:05.091 17:08:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:30:05.091 17:08:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:30:05.091 17:08:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:30:05.091 17:08:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:30:05.091 17:08:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:30:05.091 17:08:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:30:05.091 17:08:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.091 17:08:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:05.091 [2024-07-22 17:08:06.392823] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:30:05.091 17:08:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.091 17:08:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:30:05.091 "tick_rate": 2200000000, 
00:30:05.091 "ticks": 2466822272108, 00:30:05.091 "bdevs": [ 00:30:05.091 { 00:30:05.091 "name": "Malloc0", 00:30:05.091 "bytes_read": 37376, 00:30:05.091 "num_read_ops": 3, 00:30:05.091 "bytes_written": 0, 00:30:05.091 "num_write_ops": 0, 00:30:05.091 "bytes_unmapped": 0, 00:30:05.091 "num_unmap_ops": 0, 00:30:05.091 "bytes_copied": 0, 00:30:05.091 "num_copy_ops": 0, 00:30:05.091 "read_latency_ticks": 1657882, 00:30:05.091 "max_read_latency_ticks": 680589, 00:30:05.091 "min_read_latency_ticks": 435952, 00:30:05.091 "write_latency_ticks": 0, 00:30:05.091 "max_write_latency_ticks": 0, 00:30:05.091 "min_write_latency_ticks": 0, 00:30:05.091 "unmap_latency_ticks": 0, 00:30:05.091 "max_unmap_latency_ticks": 0, 00:30:05.091 "min_unmap_latency_ticks": 0, 00:30:05.091 "copy_latency_ticks": 0, 00:30:05.091 "max_copy_latency_ticks": 0, 00:30:05.091 "min_copy_latency_ticks": 0, 00:30:05.091 "io_error": {} 00:30:05.091 } 00:30:05.091 ] 00:30:05.091 }' 00:30:05.091 17:08:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:30:05.091 17:08:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=3 00:30:05.091 17:08:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:30:05.091 17:08:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=37376 00:30:05.091 17:08:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:30:05.091 [global] 00:30:05.091 thread=1 00:30:05.091 invalidate=1 00:30:05.091 rw=randread 00:30:05.091 time_based=1 00:30:05.091 runtime=5 00:30:05.091 ioengine=libaio 00:30:05.091 direct=1 00:30:05.091 bs=1024 00:30:05.091 iodepth=128 00:30:05.091 norandommap=1 00:30:05.091 numjobs=1 00:30:05.091 00:30:05.091 [job0] 00:30:05.091 filename=/dev/sda 00:30:05.091 queue_depth set to 113 (sda) 00:30:05.091 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, 
iodepth=128 00:30:05.091 fio-3.35 00:30:05.091 Starting 1 thread 00:30:10.359 00:30:10.359 job0: (groupid=0, jobs=1): err= 0: pid=77377: Mon Jul 22 17:08:11 2024 00:30:10.359 read: IOPS=30.4k, BW=29.7MiB/s (31.1MB/s)(149MiB/5004msec) 00:30:10.359 slat (nsec): min=1645, max=3324.6k, avg=30838.31, stdev=98164.82 00:30:10.359 clat (usec): min=1427, max=10896, avg=4176.50, stdev=461.93 00:30:10.359 lat (usec): min=1440, max=10904, avg=4207.34, stdev=455.07 00:30:10.359 clat percentiles (usec): 00:30:10.359 | 1.00th=[ 3523], 5.00th=[ 3752], 10.00th=[ 3851], 20.00th=[ 3949], 00:30:10.359 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:30:10.359 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4817], 00:30:10.359 | 99.00th=[ 6259], 99.50th=[ 6456], 99.90th=[ 9110], 99.95th=[ 9503], 00:30:10.359 | 99.99th=[10814] 00:30:10.359 bw ( KiB/s): min=28500, max=31434, per=100.00%, avg=30482.67, stdev=943.68, samples=9 00:30:10.359 iops : min=28500, max=31434, avg=30482.67, stdev=943.68, samples=9 00:30:10.359 lat (msec) : 2=0.03%, 4=30.70%, 10=69.24%, 20=0.03% 00:30:10.359 cpu : usr=7.34%, sys=13.73%, ctx=91534, majf=0, minf=32 00:30:10.359 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:30:10.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:10.359 issued rwts: total=152180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:10.359 00:30:10.359 Run status group 0 (all jobs): 00:30:10.359 READ: bw=29.7MiB/s (31.1MB/s), 29.7MiB/s-29.7MiB/s (31.1MB/s-31.1MB/s), io=149MiB (156MB), run=5004-5004msec 00:30:10.360 00:30:10.360 Disk stats (read/write): 00:30:10.360 sda: ios=148818/0, merge=0/0, ticks=530097/0, in_queue=530097, util=98.13% 00:30:10.360 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 
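The IOPS_RESULT and BANDWIDTH_RESULT values the trace computes after this run do not come from fio's own summary: they are the difference between the `bdev_get_iostat` counters taken before and after the 5-second run, divided by the run time. With the counter values visible in this trace, the arithmetic is (a standalone sketch; variable names follow the `qos.sh` trace):

```shell
# Counters from the two iostat snapshots around the first, unthrottled run.
start_io_count=3
start_bytes_read=37376
end_io_count=152237
end_bytes_read=156942848
run_time=5

# Integer division, as in the shell trace.
IOPS_RESULT=$(( (end_io_count - start_io_count) / run_time ))
BANDWIDTH_RESULT=$(( (end_bytes_read - start_bytes_read) / run_time ))

echo "IOPS=$IOPS_RESULT BANDWIDTH=$BANDWIDTH_RESULT"   # IOPS=30446 BANDWIDTH=31381094
```

These match the `IOPS_RESULT=30446` / `BANDWIDTH_RESULT=31381094` lines further down, which the test then compares against the QoS limits it sets with `bdev_set_qos_limit`.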
00:30:10.360 17:08:11 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.360 17:08:11 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:10.360 17:08:11 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.360 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:30:10.360 "tick_rate": 2200000000, 00:30:10.360 "ticks": 2478860881222, 00:30:10.360 "bdevs": [ 00:30:10.360 { 00:30:10.360 "name": "Malloc0", 00:30:10.360 "bytes_read": 156942848, 00:30:10.360 "num_read_ops": 152237, 00:30:10.360 "bytes_written": 0, 00:30:10.360 "num_write_ops": 0, 00:30:10.360 "bytes_unmapped": 0, 00:30:10.360 "num_unmap_ops": 0, 00:30:10.360 "bytes_copied": 0, 00:30:10.360 "num_copy_ops": 0, 00:30:10.360 "read_latency_ticks": 55913926790, 00:30:10.360 "max_read_latency_ticks": 4242643, 00:30:10.360 "min_read_latency_ticks": 19923, 00:30:10.360 "write_latency_ticks": 0, 00:30:10.360 "max_write_latency_ticks": 0, 00:30:10.360 "min_write_latency_ticks": 0, 00:30:10.360 "unmap_latency_ticks": 0, 00:30:10.360 "max_unmap_latency_ticks": 0, 00:30:10.360 "min_unmap_latency_ticks": 0, 00:30:10.360 "copy_latency_ticks": 0, 00:30:10.360 "max_copy_latency_ticks": 0, 00:30:10.360 "min_copy_latency_ticks": 0, 00:30:10.360 "io_error": {} 00:30:10.360 } 00:30:10.360 ] 00:30:10.360 }' 00:30:10.360 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:30:10.360 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=152237 00:30:10.360 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=156942848 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=30446 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=31381094 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@90 -- # 
IOPS_LIMIT=15223 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@91 -- # BANDWIDTH_LIMIT=15690547 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@94 -- # READ_BANDWIDTH_LIMIT=7845273 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@98 -- # IOPS_LIMIT=15000 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@99 -- # BANDWIDTH_LIMIT_MB=14 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@100 -- # BANDWIDTH_LIMIT=14680064 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@101 -- # READ_BANDWIDTH_LIMIT_MB=7 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@102 -- # READ_BANDWIDTH_LIMIT=7340032 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@105 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 15000 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@106 -- # run_fio Malloc0 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:30:10.618 17:08:11 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.618 
17:08:11 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:10.618 17:08:12 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.618 17:08:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:30:10.618 "tick_rate": 2200000000, 00:30:10.618 "ticks": 2479151310876, 00:30:10.618 "bdevs": [ 00:30:10.618 { 00:30:10.618 "name": "Malloc0", 00:30:10.618 "bytes_read": 156942848, 00:30:10.618 "num_read_ops": 152237, 00:30:10.618 "bytes_written": 0, 00:30:10.618 "num_write_ops": 0, 00:30:10.618 "bytes_unmapped": 0, 00:30:10.618 "num_unmap_ops": 0, 00:30:10.618 "bytes_copied": 0, 00:30:10.618 "num_copy_ops": 0, 00:30:10.618 "read_latency_ticks": 55913926790, 00:30:10.618 "max_read_latency_ticks": 4242643, 00:30:10.618 "min_read_latency_ticks": 19923, 00:30:10.618 "write_latency_ticks": 0, 00:30:10.618 "max_write_latency_ticks": 0, 00:30:10.618 "min_write_latency_ticks": 0, 00:30:10.618 "unmap_latency_ticks": 0, 00:30:10.618 "max_unmap_latency_ticks": 0, 00:30:10.618 "min_unmap_latency_ticks": 0, 00:30:10.618 "copy_latency_ticks": 0, 00:30:10.618 "max_copy_latency_ticks": 0, 00:30:10.618 "min_copy_latency_ticks": 0, 00:30:10.618 "io_error": {} 00:30:10.618 } 00:30:10.618 ] 00:30:10.618 }' 00:30:10.618 17:08:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:30:10.618 17:08:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=152237 00:30:10.618 17:08:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:30:10.618 17:08:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=156942848 00:30:10.618 17:08:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:30:10.618 [global] 00:30:10.618 thread=1 00:30:10.618 invalidate=1 00:30:10.618 rw=randread 00:30:10.619 time_based=1 00:30:10.619 runtime=5 00:30:10.619 ioengine=libaio 00:30:10.619 direct=1 
00:30:10.619 bs=1024 00:30:10.619 iodepth=128 00:30:10.619 norandommap=1 00:30:10.619 numjobs=1 00:30:10.619 00:30:10.619 [job0] 00:30:10.619 filename=/dev/sda 00:30:10.619 queue_depth set to 113 (sda) 00:30:10.877 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:30:10.877 fio-3.35 00:30:10.877 Starting 1 thread 00:30:16.160 00:30:16.160 job0: (groupid=0, jobs=1): err= 0: pid=77466: Mon Jul 22 17:08:17 2024 00:30:16.160 read: IOPS=15.0k, BW=14.6MiB/s (15.4MB/s)(73.4MiB/5007msec) 00:30:16.160 slat (nsec): min=1763, max=1344.8k, avg=63646.65, stdev=208909.75 00:30:16.160 clat (usec): min=1179, max=14829, avg=8466.03, stdev=528.18 00:30:16.160 lat (usec): min=1244, max=14840, avg=8529.68, stdev=529.29 00:30:16.160 clat percentiles (usec): 00:30:16.160 | 1.00th=[ 7373], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8094], 00:30:16.160 | 30.00th=[ 8160], 40.00th=[ 8225], 50.00th=[ 8291], 60.00th=[ 8717], 00:30:16.160 | 70.00th=[ 8979], 80.00th=[ 8979], 90.00th=[ 9110], 95.00th=[ 9110], 00:30:16.160 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[ 9896], 99.95th=[11863], 00:30:16.160 | 99.99th=[13960] 00:30:16.160 bw ( KiB/s): min=14972, max=15038, per=100.00%, avg=15016.44, stdev=20.12, samples=9 00:30:16.160 iops : min=14972, max=15038, avg=15016.67, stdev=20.00, samples=9 00:30:16.160 lat (msec) : 2=0.01%, 4=0.07%, 10=99.84%, 20=0.09% 00:30:16.160 cpu : usr=4.91%, sys=10.09%, ctx=42346, majf=0, minf=32 00:30:16.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:30:16.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:16.160 issued rwts: total=75114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:16.160 00:30:16.160 Run status group 0 (all jobs): 00:30:16.160 READ: bw=14.6MiB/s (15.4MB/s), 
14.6MiB/s-14.6MiB/s (15.4MB/s-15.4MB/s), io=73.4MiB (76.9MB), run=5007-5007msec 00:30:16.160 00:30:16.160 Disk stats (read/write): 00:30:16.160 sda: ios=73335/0, merge=0/0, ticks=539294/0, in_queue=539294, util=98.13% 00:30:16.160 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:30:16.161 "tick_rate": 2200000000, 00:30:16.161 "ticks": 2491173652572, 00:30:16.161 "bdevs": [ 00:30:16.161 { 00:30:16.161 "name": "Malloc0", 00:30:16.161 "bytes_read": 233859584, 00:30:16.161 "num_read_ops": 227351, 00:30:16.161 "bytes_written": 0, 00:30:16.161 "num_write_ops": 0, 00:30:16.161 "bytes_unmapped": 0, 00:30:16.161 "num_unmap_ops": 0, 00:30:16.161 "bytes_copied": 0, 00:30:16.161 "num_copy_ops": 0, 00:30:16.161 "read_latency_ticks": 660160043659, 00:30:16.161 "max_read_latency_ticks": 10255508, 00:30:16.161 "min_read_latency_ticks": 19923, 00:30:16.161 "write_latency_ticks": 0, 00:30:16.161 "max_write_latency_ticks": 0, 00:30:16.161 "min_write_latency_ticks": 0, 00:30:16.161 "unmap_latency_ticks": 0, 00:30:16.161 "max_unmap_latency_ticks": 0, 00:30:16.161 "min_unmap_latency_ticks": 0, 00:30:16.161 "copy_latency_ticks": 0, 00:30:16.161 "max_copy_latency_ticks": 0, 00:30:16.161 "min_copy_latency_ticks": 0, 00:30:16.161 "io_error": {} 00:30:16.161 } 00:30:16.161 ] 00:30:16.161 }' 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=227351 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:30:16.161 17:08:17 
iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=233859584 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=15022 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=15383347 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@107 -- # verify_qos_limits 15022 15000 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=15022 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=15000 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@110 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 0 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@111 -- # run_fio Malloc0 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- 
qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:30:16.161 "tick_rate": 2200000000, 00:30:16.161 "ticks": 2491499366614, 00:30:16.161 "bdevs": [ 00:30:16.161 { 00:30:16.161 "name": "Malloc0", 00:30:16.161 "bytes_read": 233859584, 00:30:16.161 "num_read_ops": 227351, 00:30:16.161 "bytes_written": 0, 00:30:16.161 "num_write_ops": 0, 00:30:16.161 "bytes_unmapped": 0, 00:30:16.161 "num_unmap_ops": 0, 00:30:16.161 "bytes_copied": 0, 00:30:16.161 "num_copy_ops": 0, 00:30:16.161 "read_latency_ticks": 660160043659, 00:30:16.161 "max_read_latency_ticks": 10255508, 00:30:16.161 "min_read_latency_ticks": 19923, 00:30:16.161 "write_latency_ticks": 0, 00:30:16.161 "max_write_latency_ticks": 0, 00:30:16.161 "min_write_latency_ticks": 0, 00:30:16.161 "unmap_latency_ticks": 0, 00:30:16.161 "max_unmap_latency_ticks": 0, 00:30:16.161 "min_unmap_latency_ticks": 0, 00:30:16.161 "copy_latency_ticks": 0, 00:30:16.161 "max_copy_latency_ticks": 0, 00:30:16.161 "min_copy_latency_ticks": 0, 00:30:16.161 "io_error": {} 00:30:16.161 } 00:30:16.161 ] 00:30:16.161 }' 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=227351 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=233859584 00:30:16.161 17:08:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:30:16.161 [global] 
00:30:16.161 thread=1 00:30:16.161 invalidate=1 00:30:16.161 rw=randread 00:30:16.161 time_based=1 00:30:16.161 runtime=5 00:30:16.161 ioengine=libaio 00:30:16.161 direct=1 00:30:16.161 bs=1024 00:30:16.161 iodepth=128 00:30:16.161 norandommap=1 00:30:16.161 numjobs=1 00:30:16.161 00:30:16.161 [job0] 00:30:16.161 filename=/dev/sda 00:30:16.161 queue_depth set to 113 (sda) 00:30:16.418 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:30:16.418 fio-3.35 00:30:16.418 Starting 1 thread 00:30:21.684 00:30:21.684 job0: (groupid=0, jobs=1): err= 0: pid=77557: Mon Jul 22 17:08:23 2024 00:30:21.684 read: IOPS=31.1k, BW=30.4MiB/s (31.9MB/s)(152MiB/5004msec) 00:30:21.684 slat (nsec): min=1909, max=701096, avg=30071.22, stdev=92855.09 00:30:21.684 clat (usec): min=1200, max=8182, avg=4083.84, stdev=316.64 00:30:21.684 lat (usec): min=1209, max=8186, avg=4113.91, stdev=305.42 00:30:21.685 clat percentiles (usec): 00:30:21.685 | 1.00th=[ 3458], 5.00th=[ 3621], 10.00th=[ 3752], 20.00th=[ 3851], 00:30:21.685 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4113], 00:30:21.685 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:30:21.685 | 99.00th=[ 5014], 99.50th=[ 5473], 99.90th=[ 5669], 99.95th=[ 5800], 00:30:21.685 | 99.99th=[ 7570] 00:30:21.685 bw ( KiB/s): min=29952, max=32384, per=100.00%, avg=31399.33, stdev=982.12, samples=9 00:30:21.685 iops : min=29952, max=32384, avg=31399.33, stdev=982.12, samples=9 00:30:21.685 lat (msec) : 2=0.01%, 4=42.80%, 10=57.19% 00:30:21.685 cpu : usr=7.47%, sys=14.65%, ctx=94025, majf=0, minf=32 00:30:21.685 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:30:21.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:21.685 issued rwts: total=155653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.685 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:30:21.685 00:30:21.685 Run status group 0 (all jobs): 00:30:21.685 READ: bw=30.4MiB/s (31.9MB/s), 30.4MiB/s-30.4MiB/s (31.9MB/s-31.9MB/s), io=152MiB (159MB), run=5004-5004msec 00:30:21.685 00:30:21.685 Disk stats (read/write): 00:30:21.685 sda: ios=152531/0, merge=0/0, ticks=533645/0, in_queue=533645, util=98.13% 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:30:21.685 "tick_rate": 2200000000, 00:30:21.685 "ticks": 2503428812102, 00:30:21.685 "bdevs": [ 00:30:21.685 { 00:30:21.685 "name": "Malloc0", 00:30:21.685 "bytes_read": 393248256, 00:30:21.685 "num_read_ops": 383004, 00:30:21.685 "bytes_written": 0, 00:30:21.685 "num_write_ops": 0, 00:30:21.685 "bytes_unmapped": 0, 00:30:21.685 "num_unmap_ops": 0, 00:30:21.685 "bytes_copied": 0, 00:30:21.685 "num_copy_ops": 0, 00:30:21.685 "read_latency_ticks": 715870043227, 00:30:21.685 "max_read_latency_ticks": 10255508, 00:30:21.685 "min_read_latency_ticks": 19923, 00:30:21.685 "write_latency_ticks": 0, 00:30:21.685 "max_write_latency_ticks": 0, 00:30:21.685 "min_write_latency_ticks": 0, 00:30:21.685 "unmap_latency_ticks": 0, 00:30:21.685 "max_unmap_latency_ticks": 0, 00:30:21.685 "min_unmap_latency_ticks": 0, 00:30:21.685 "copy_latency_ticks": 0, 00:30:21.685 "max_copy_latency_ticks": 0, 00:30:21.685 "min_copy_latency_ticks": 0, 00:30:21.685 "io_error": {} 00:30:21.685 } 00:30:21.685 ] 00:30:21.685 }' 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- 
qos/qos.sh@29 -- # end_io_count=383004 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=393248256 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=31130 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=31877734 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@112 -- # '[' 31130 -gt 15000 ']' 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@115 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 15000 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@116 -- # run_fio Malloc0 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:30:21.685 "tick_rate": 2200000000, 00:30:21.685 "ticks": 2503736560217, 00:30:21.685 "bdevs": [ 00:30:21.685 { 00:30:21.685 "name": "Malloc0", 00:30:21.685 "bytes_read": 393248256, 00:30:21.685 "num_read_ops": 383004, 00:30:21.685 "bytes_written": 0, 00:30:21.685 "num_write_ops": 0, 00:30:21.685 "bytes_unmapped": 0, 00:30:21.685 "num_unmap_ops": 0, 00:30:21.685 "bytes_copied": 0, 00:30:21.685 "num_copy_ops": 0, 00:30:21.685 "read_latency_ticks": 715870043227, 00:30:21.685 "max_read_latency_ticks": 10255508, 00:30:21.685 "min_read_latency_ticks": 19923, 00:30:21.685 "write_latency_ticks": 0, 00:30:21.685 "max_write_latency_ticks": 0, 00:30:21.685 "min_write_latency_ticks": 0, 00:30:21.685 "unmap_latency_ticks": 0, 00:30:21.685 "max_unmap_latency_ticks": 0, 00:30:21.685 "min_unmap_latency_ticks": 0, 00:30:21.685 "copy_latency_ticks": 0, 00:30:21.685 "max_copy_latency_ticks": 0, 00:30:21.685 "min_copy_latency_ticks": 0, 00:30:21.685 "io_error": {} 00:30:21.685 } 00:30:21.685 ] 00:30:21.685 }' 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=383004 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=393248256 00:30:21.685 17:08:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:30:21.943 [global] 00:30:21.943 thread=1 00:30:21.943 invalidate=1 00:30:21.943 rw=randread 00:30:21.943 time_based=1 00:30:21.943 runtime=5 00:30:21.943 ioengine=libaio 00:30:21.943 direct=1 00:30:21.943 bs=1024 00:30:21.943 iodepth=128 00:30:21.943 norandommap=1 00:30:21.943 numjobs=1 00:30:21.943 00:30:21.943 [job0] 00:30:21.943 filename=/dev/sda 00:30:21.943 
queue_depth set to 113 (sda) 00:30:21.943 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:30:21.943 fio-3.35 00:30:21.943 Starting 1 thread 00:30:27.210 00:30:27.210 job0: (groupid=0, jobs=1): err= 0: pid=77641: Mon Jul 22 17:08:28 2024 00:30:27.211 read: IOPS=15.0k, BW=14.6MiB/s (15.4MB/s)(73.3MiB/5008msec) 00:30:27.211 slat (nsec): min=1956, max=1530.9k, avg=63609.78, stdev=208520.13 00:30:27.211 clat (usec): min=1955, max=15736, avg=8468.03, stdev=528.94 00:30:27.211 lat (usec): min=1976, max=15746, avg=8531.63, stdev=530.02 00:30:27.211 clat percentiles (usec): 00:30:27.211 | 1.00th=[ 7373], 5.00th=[ 7701], 10.00th=[ 8029], 20.00th=[ 8094], 00:30:27.211 | 30.00th=[ 8160], 40.00th=[ 8225], 50.00th=[ 8291], 60.00th=[ 8848], 00:30:27.211 | 70.00th=[ 8979], 80.00th=[ 8979], 90.00th=[ 8979], 95.00th=[ 9110], 00:30:27.211 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[10683], 99.95th=[12911], 00:30:27.211 | 99.99th=[14877] 00:30:27.211 bw ( KiB/s): min=14814, max=15036, per=99.97%, avg=14991.20, stdev=65.06, samples=10 00:30:27.211 iops : min=14814, max=15036, avg=14991.40, stdev=65.14, samples=10 00:30:27.211 lat (msec) : 2=0.01%, 4=0.05%, 10=99.83%, 20=0.11% 00:30:27.211 cpu : usr=5.17%, sys=9.69%, ctx=42745, majf=0, minf=32 00:30:27.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:30:27.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:27.211 issued rwts: total=75099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.211 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:27.211 00:30:27.211 Run status group 0 (all jobs): 00:30:27.211 READ: bw=14.6MiB/s (15.4MB/s), 14.6MiB/s-14.6MiB/s (15.4MB/s-15.4MB/s), io=73.3MiB (76.9MB), run=5008-5008msec 00:30:27.211 00:30:27.211 Disk stats (read/write): 00:30:27.211 sda: ios=73320/0, merge=0/0, 
ticks=539263/0, in_queue=539263, util=98.13% 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:30:27.211 "tick_rate": 2200000000, 00:30:27.211 "ticks": 2515680828449, 00:30:27.211 "bdevs": [ 00:30:27.211 { 00:30:27.211 "name": "Malloc0", 00:30:27.211 "bytes_read": 470149632, 00:30:27.211 "num_read_ops": 458103, 00:30:27.211 "bytes_written": 0, 00:30:27.211 "num_write_ops": 0, 00:30:27.211 "bytes_unmapped": 0, 00:30:27.211 "num_unmap_ops": 0, 00:30:27.211 "bytes_copied": 0, 00:30:27.211 "num_copy_ops": 0, 00:30:27.211 "read_latency_ticks": 1321897859377, 00:30:27.211 "max_read_latency_ticks": 10452653, 00:30:27.211 "min_read_latency_ticks": 19923, 00:30:27.211 "write_latency_ticks": 0, 00:30:27.211 "max_write_latency_ticks": 0, 00:30:27.211 "min_write_latency_ticks": 0, 00:30:27.211 "unmap_latency_ticks": 0, 00:30:27.211 "max_unmap_latency_ticks": 0, 00:30:27.211 "min_unmap_latency_ticks": 0, 00:30:27.211 "copy_latency_ticks": 0, 00:30:27.211 "max_copy_latency_ticks": 0, 00:30:27.211 "min_copy_latency_ticks": 0, 00:30:27.211 "io_error": {} 00:30:27.211 } 00:30:27.211 ] 00:30:27.211 }' 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=458103 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=470149632 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=15019 00:30:27.211 
17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=15380275 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@117 -- # verify_qos_limits 15019 15000 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=15019 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=15000 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:30:27.211 I/O rate limiting tests successful 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@119 -- # echo 'I/O rate limiting tests successful' 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@122 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 0 --rw_mbytes_per_sec 14 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@123 -- # run_fio Malloc0 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos 
-- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:30:27.211 "tick_rate": 2200000000, 00:30:27.211 "ticks": 2516015359566, 00:30:27.211 "bdevs": [ 00:30:27.211 { 00:30:27.211 "name": "Malloc0", 00:30:27.211 "bytes_read": 470149632, 00:30:27.211 "num_read_ops": 458103, 00:30:27.211 "bytes_written": 0, 00:30:27.211 "num_write_ops": 0, 00:30:27.211 "bytes_unmapped": 0, 00:30:27.211 "num_unmap_ops": 0, 00:30:27.211 "bytes_copied": 0, 00:30:27.211 "num_copy_ops": 0, 00:30:27.211 "read_latency_ticks": 1321897859377, 00:30:27.211 "max_read_latency_ticks": 10452653, 00:30:27.211 "min_read_latency_ticks": 19923, 00:30:27.211 "write_latency_ticks": 0, 00:30:27.211 "max_write_latency_ticks": 0, 00:30:27.211 "min_write_latency_ticks": 0, 00:30:27.211 "unmap_latency_ticks": 0, 00:30:27.211 "max_unmap_latency_ticks": 0, 00:30:27.211 "min_unmap_latency_ticks": 0, 00:30:27.211 "copy_latency_ticks": 0, 00:30:27.211 "max_copy_latency_ticks": 0, 00:30:27.211 "min_copy_latency_ticks": 0, 00:30:27.211 "io_error": {} 00:30:27.211 } 00:30:27.211 ] 00:30:27.211 }' 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=458103 00:30:27.211 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:30:27.469 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=470149632 00:30:27.469 17:08:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:30:27.469 [global] 
00:30:27.469 thread=1 00:30:27.469 invalidate=1 00:30:27.469 rw=randread 00:30:27.469 time_based=1 00:30:27.469 runtime=5 00:30:27.469 ioengine=libaio 00:30:27.469 direct=1 00:30:27.469 bs=1024 00:30:27.469 iodepth=128 00:30:27.469 norandommap=1 00:30:27.469 numjobs=1 00:30:27.469 00:30:27.469 [job0] 00:30:27.469 filename=/dev/sda 00:30:27.469 queue_depth set to 113 (sda) 00:30:27.469 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:30:27.469 fio-3.35 00:30:27.469 Starting 1 thread 00:30:32.737 00:30:32.737 job0: (groupid=0, jobs=1): err= 0: pid=77732: Mon Jul 22 17:08:34 2024 00:30:32.737 read: IOPS=14.3k, BW=14.0MiB/s (14.7MB/s)(70.1MiB/5008msec) 00:30:32.737 slat (usec): min=2, max=3993, avg=66.49, stdev=229.62 00:30:32.737 clat (usec): min=1886, max=16388, avg=8858.68, stdev=552.47 00:30:32.737 lat (usec): min=1977, max=16400, avg=8925.17, stdev=517.92 00:30:32.737 clat percentiles (usec): 00:30:32.737 | 1.00th=[ 7504], 5.00th=[ 7963], 10.00th=[ 8225], 20.00th=[ 8455], 00:30:32.737 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 8979], 00:30:32.737 | 70.00th=[ 9110], 80.00th=[ 9241], 90.00th=[ 9372], 95.00th=[ 9634], 00:30:32.737 | 99.00th=[ 9896], 99.50th=[10028], 99.90th=[11994], 99.95th=[14222], 00:30:32.737 | 99.99th=[16319] 00:30:32.737 bw ( KiB/s): min=14308, max=14394, per=100.00%, avg=14352.00, stdev=25.24, samples=9 00:30:32.737 iops : min=14308, max=14394, avg=14352.00, stdev=25.24, samples=9 00:30:32.737 lat (msec) : 2=0.01%, 4=0.06%, 10=99.41%, 20=0.53% 00:30:32.737 cpu : usr=5.35%, sys=9.59%, ctx=40379, majf=0, minf=32 00:30:32.737 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:30:32.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:32.737 issued rwts: total=71794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.737 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:30:32.737 00:30:32.737 Run status group 0 (all jobs): 00:30:32.737 READ: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=70.1MiB (73.5MB), run=5008-5008msec 00:30:32.737 00:30:32.737 Disk stats (read/write): 00:30:32.737 sda: ios=70132/0, merge=0/0, ticks=538209/0, in_queue=538209, util=98.11% 00:30:32.737 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:30:32.737 17:08:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.737 17:08:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:32.737 17:08:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.737 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:30:32.737 "tick_rate": 2200000000, 00:30:32.737 "ticks": 2528058110842, 00:30:32.737 "bdevs": [ 00:30:32.737 { 00:30:32.737 "name": "Malloc0", 00:30:32.737 "bytes_read": 543666688, 00:30:32.737 "num_read_ops": 529897, 00:30:32.737 "bytes_written": 0, 00:30:32.737 "num_write_ops": 0, 00:30:32.737 "bytes_unmapped": 0, 00:30:32.737 "num_unmap_ops": 0, 00:30:32.737 "bytes_copied": 0, 00:30:32.737 "num_copy_ops": 0, 00:30:32.737 "read_latency_ticks": 1911603968238, 00:30:32.737 "max_read_latency_ticks": 11248245, 00:30:32.737 "min_read_latency_ticks": 19923, 00:30:32.737 "write_latency_ticks": 0, 00:30:32.737 "max_write_latency_ticks": 0, 00:30:32.737 "min_write_latency_ticks": 0, 00:30:32.737 "unmap_latency_ticks": 0, 00:30:32.737 "max_unmap_latency_ticks": 0, 00:30:32.737 "min_unmap_latency_ticks": 0, 00:30:32.737 "copy_latency_ticks": 0, 00:30:32.737 "max_copy_latency_ticks": 0, 00:30:32.737 "min_copy_latency_ticks": 0, 00:30:32.737 "io_error": {} 00:30:32.737 } 00:30:32.737 ] 00:30:32.737 }' 00:30:32.737 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:30:32.737 17:08:34 iscsi_tgt.iscsi_tgt_qos -- 
qos/qos.sh@29 -- # end_io_count=529897 00:30:32.737 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:30:32.737 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=543666688 00:30:32.737 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=14358 00:30:32.737 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=14703411 00:30:32.737 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@124 -- # verify_qos_limits 14703411 14680064 00:30:32.737 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=14703411 00:30:32.737 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=14680064 00:30:32.996 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:30:32.996 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:30:32.996 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:30:32.996 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:30:32.996 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@127 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_mbytes_per_sec 0 00:30:32.996 17:08:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.996 17:08:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:32.996 17:08:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.996 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@128 -- # run_fio Malloc0 00:30:32.996 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:30:32.996 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:30:32.996 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:30:32.996 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:30:32.996 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:30:32.996 17:08:34 iscsi_tgt.iscsi_tgt_qos -- 
qos/qos.sh@19 -- # local end_bytes_read 00:30:32.996 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:30:32.996 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:30:32.996 17:08:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.996 17:08:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:32.996 17:08:34 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.997 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:30:32.997 "tick_rate": 2200000000, 00:30:32.997 "ticks": 2528392622624, 00:30:32.997 "bdevs": [ 00:30:32.997 { 00:30:32.997 "name": "Malloc0", 00:30:32.997 "bytes_read": 543666688, 00:30:32.997 "num_read_ops": 529897, 00:30:32.997 "bytes_written": 0, 00:30:32.997 "num_write_ops": 0, 00:30:32.997 "bytes_unmapped": 0, 00:30:32.997 "num_unmap_ops": 0, 00:30:32.997 "bytes_copied": 0, 00:30:32.997 "num_copy_ops": 0, 00:30:32.997 "read_latency_ticks": 1911603968238, 00:30:32.997 "max_read_latency_ticks": 11248245, 00:30:32.997 "min_read_latency_ticks": 19923, 00:30:32.997 "write_latency_ticks": 0, 00:30:32.997 "max_write_latency_ticks": 0, 00:30:32.997 "min_write_latency_ticks": 0, 00:30:32.997 "unmap_latency_ticks": 0, 00:30:32.997 "max_unmap_latency_ticks": 0, 00:30:32.997 "min_unmap_latency_ticks": 0, 00:30:32.997 "copy_latency_ticks": 0, 00:30:32.997 "max_copy_latency_ticks": 0, 00:30:32.997 "min_copy_latency_ticks": 0, 00:30:32.997 "io_error": {} 00:30:32.997 } 00:30:32.997 ] 00:30:32.997 }' 00:30:32.997 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:30:32.997 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=529897 00:30:32.997 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:30:32.997 17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=543666688 00:30:32.997 
17:08:34 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:30:32.997 [global] 00:30:32.997 thread=1 00:30:32.997 invalidate=1 00:30:32.997 rw=randread 00:30:32.997 time_based=1 00:30:32.997 runtime=5 00:30:32.997 ioengine=libaio 00:30:32.997 direct=1 00:30:32.997 bs=1024 00:30:32.997 iodepth=128 00:30:32.997 norandommap=1 00:30:32.997 numjobs=1 00:30:32.997 00:30:32.997 [job0] 00:30:32.997 filename=/dev/sda 00:30:32.997 queue_depth set to 113 (sda) 00:30:33.255 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:30:33.255 fio-3.35 00:30:33.255 Starting 1 thread 00:30:38.524 00:30:38.524 job0: (groupid=0, jobs=1): err= 0: pid=77820: Mon Jul 22 17:08:39 2024 00:30:38.524 read: IOPS=31.1k, BW=30.4MiB/s (31.8MB/s)(152MiB/5004msec) 00:30:38.524 slat (usec): min=2, max=2900, avg=30.09, stdev=95.90 00:30:38.524 clat (usec): min=2095, max=7224, avg=4083.81, stdev=237.60 00:30:38.524 lat (usec): min=2102, max=7227, avg=4113.90, stdev=219.62 00:30:38.524 clat percentiles (usec): 00:30:38.524 | 1.00th=[ 3490], 5.00th=[ 3720], 10.00th=[ 3818], 20.00th=[ 3916], 00:30:38.524 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:30:38.524 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4359], 95.00th=[ 4424], 00:30:38.524 | 99.00th=[ 4621], 99.50th=[ 4686], 99.90th=[ 5211], 99.95th=[ 5932], 00:30:38.524 | 99.99th=[ 6718] 00:30:38.524 bw ( KiB/s): min=30754, max=31840, per=99.80%, avg=31036.22, stdev=339.70, samples=9 00:30:38.524 iops : min=30754, max=31840, avg=31036.22, stdev=339.70, samples=9 00:30:38.524 lat (msec) : 4=36.18%, 10=63.82% 00:30:38.524 cpu : usr=7.66%, sys=14.03%, ctx=84319, majf=0, minf=32 00:30:38.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:30:38.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.524 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:38.524 issued rwts: total=155613,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.524 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:38.524 00:30:38.524 Run status group 0 (all jobs): 00:30:38.524 READ: bw=30.4MiB/s (31.8MB/s), 30.4MiB/s-30.4MiB/s (31.8MB/s-31.8MB/s), io=152MiB (159MB), run=5004-5004msec 00:30:38.524 00:30:38.524 Disk stats (read/write): 00:30:38.524 sda: ios=152011/0, merge=0/0, ticks=534758/0, in_queue=534758, util=98.15% 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:30:38.524 "tick_rate": 2200000000, 00:30:38.524 "ticks": 2540370963668, 00:30:38.524 "bdevs": [ 00:30:38.524 { 00:30:38.524 "name": "Malloc0", 00:30:38.524 "bytes_read": 703014400, 00:30:38.524 "num_read_ops": 685510, 00:30:38.524 "bytes_written": 0, 00:30:38.524 "num_write_ops": 0, 00:30:38.524 "bytes_unmapped": 0, 00:30:38.524 "num_unmap_ops": 0, 00:30:38.524 "bytes_copied": 0, 00:30:38.524 "num_copy_ops": 0, 00:30:38.524 "read_latency_ticks": 1966328988180, 00:30:38.524 "max_read_latency_ticks": 11248245, 00:30:38.524 "min_read_latency_ticks": 19923, 00:30:38.524 "write_latency_ticks": 0, 00:30:38.524 "max_write_latency_ticks": 0, 00:30:38.524 "min_write_latency_ticks": 0, 00:30:38.524 "unmap_latency_ticks": 0, 00:30:38.524 "max_unmap_latency_ticks": 0, 00:30:38.524 "min_unmap_latency_ticks": 0, 00:30:38.524 "copy_latency_ticks": 0, 00:30:38.524 "max_copy_latency_ticks": 0, 00:30:38.524 "min_copy_latency_ticks": 0, 00:30:38.524 "io_error": {} 00:30:38.524 } 00:30:38.524 ] 00:30:38.524 }' 
00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=685510 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=703014400 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=31122 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=31869542 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@129 -- # '[' 31869542 -gt 14680064 ']' 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@132 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_mbytes_per_sec 14 --r_mbytes_per_sec 7 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@133 -- # run_fio Malloc0 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 
00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:30:38.524 "tick_rate": 2200000000, 00:30:38.524 "ticks": 2540670139873, 00:30:38.524 "bdevs": [ 00:30:38.524 { 00:30:38.524 "name": "Malloc0", 00:30:38.524 "bytes_read": 703014400, 00:30:38.524 "num_read_ops": 685510, 00:30:38.524 "bytes_written": 0, 00:30:38.524 "num_write_ops": 0, 00:30:38.524 "bytes_unmapped": 0, 00:30:38.524 "num_unmap_ops": 0, 00:30:38.524 "bytes_copied": 0, 00:30:38.524 "num_copy_ops": 0, 00:30:38.524 "read_latency_ticks": 1966328988180, 00:30:38.524 "max_read_latency_ticks": 11248245, 00:30:38.524 "min_read_latency_ticks": 19923, 00:30:38.524 "write_latency_ticks": 0, 00:30:38.524 "max_write_latency_ticks": 0, 00:30:38.524 "min_write_latency_ticks": 0, 00:30:38.524 "unmap_latency_ticks": 0, 00:30:38.524 "max_unmap_latency_ticks": 0, 00:30:38.524 "min_unmap_latency_ticks": 0, 00:30:38.524 "copy_latency_ticks": 0, 00:30:38.524 "max_copy_latency_ticks": 0, 00:30:38.524 "min_copy_latency_ticks": 0, 00:30:38.524 "io_error": {} 00:30:38.524 } 00:30:38.524 ] 00:30:38.524 }' 00:30:38.524 17:08:39 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:30:38.524 17:08:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=685510 00:30:38.524 17:08:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:30:38.524 17:08:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=703014400 00:30:38.524 17:08:40 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:30:38.524 [global] 00:30:38.524 thread=1 00:30:38.524 invalidate=1 00:30:38.524 rw=randread 00:30:38.524 time_based=1 00:30:38.524 runtime=5 00:30:38.524 ioengine=libaio 00:30:38.524 
direct=1 00:30:38.524 bs=1024 00:30:38.524 iodepth=128 00:30:38.524 norandommap=1 00:30:38.525 numjobs=1 00:30:38.525 00:30:38.525 [job0] 00:30:38.525 filename=/dev/sda 00:30:38.525 queue_depth set to 113 (sda) 00:30:38.783 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:30:38.783 fio-3.35 00:30:38.783 Starting 1 thread 00:30:44.097 00:30:44.097 job0: (groupid=0, jobs=1): err= 0: pid=77907: Mon Jul 22 17:08:45 2024 00:30:44.097 read: IOPS=7168, BW=7168KiB/s (7340kB/s)(35.1MiB/5015msec) 00:30:44.097 slat (usec): min=2, max=4096, avg=135.06, stdev=369.20 00:30:44.097 clat (usec): min=2362, max=32790, avg=17715.15, stdev=989.31 00:30:44.097 lat (usec): min=2379, max=32795, avg=17850.20, stdev=959.79 00:30:44.097 clat percentiles (usec): 00:30:44.097 | 1.00th=[15795], 5.00th=[16450], 10.00th=[16909], 20.00th=[17171], 00:30:44.097 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17695], 60.00th=[17957], 00:30:44.097 | 70.00th=[18220], 80.00th=[18220], 90.00th=[18482], 95.00th=[19006], 00:30:44.097 | 99.00th=[19268], 99.50th=[19792], 99.90th=[27395], 99.95th=[30802], 00:30:44.097 | 99.99th=[31851] 00:30:44.097 bw ( KiB/s): min= 7100, max= 7184, per=99.94%, avg=7164.20, stdev=26.05, samples=10 00:30:44.097 iops : min= 7100, max= 7184, avg=7164.20, stdev=26.05, samples=10 00:30:44.097 lat (msec) : 4=0.02%, 10=0.16%, 20=99.44%, 50=0.38% 00:30:44.097 cpu : usr=3.29%, sys=6.78%, ctx=19611, majf=0, minf=32 00:30:44.097 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:44.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:44.097 issued rwts: total=35948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:44.097 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:44.097 00:30:44.097 Run status group 0 (all jobs): 00:30:44.097 READ: bw=7168KiB/s (7340kB/s), 
7168KiB/s-7168KiB/s (7340kB/s-7340kB/s), io=35.1MiB (36.8MB), run=5015-5015msec 00:30:44.097 00:30:44.097 Disk stats (read/write): 00:30:44.097 sda: ios=35045/0, merge=0/0, ticks=543173/0, in_queue=543173, util=98.13% 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:30:44.097 "tick_rate": 2200000000, 00:30:44.097 "ticks": 2552648957116, 00:30:44.097 "bdevs": [ 00:30:44.097 { 00:30:44.097 "name": "Malloc0", 00:30:44.097 "bytes_read": 739825152, 00:30:44.097 "num_read_ops": 721458, 00:30:44.097 "bytes_written": 0, 00:30:44.097 "num_write_ops": 0, 00:30:44.097 "bytes_unmapped": 0, 00:30:44.097 "num_unmap_ops": 0, 00:30:44.097 "bytes_copied": 0, 00:30:44.097 "num_copy_ops": 0, 00:30:44.097 "read_latency_ticks": 2618646655858, 00:30:44.097 "max_read_latency_ticks": 25273378, 00:30:44.097 "min_read_latency_ticks": 19923, 00:30:44.097 "write_latency_ticks": 0, 00:30:44.097 "max_write_latency_ticks": 0, 00:30:44.097 "min_write_latency_ticks": 0, 00:30:44.097 "unmap_latency_ticks": 0, 00:30:44.097 "max_unmap_latency_ticks": 0, 00:30:44.097 "min_unmap_latency_ticks": 0, 00:30:44.097 "copy_latency_ticks": 0, 00:30:44.097 "max_copy_latency_ticks": 0, 00:30:44.097 "min_copy_latency_ticks": 0, 00:30:44.097 "io_error": {} 00:30:44.097 } 00:30:44.097 ] 00:30:44.097 }' 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=721458 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:30:44.097 
17:08:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=739825152 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=7189 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=7362150 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@134 -- # verify_qos_limits 7362150 7340032 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=7362150 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=7340032 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:30:44.097 I/O bandwidth limiting tests successful 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@136 -- # echo 'I/O bandwidth limiting tests successful' 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@138 -- # iscsicleanup 00:30:44.097 Cleaning up iSCSI connection 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:30:44.097 Logging out of session [sid: 20, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:30:44.097 Logout of [sid: 20, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
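The byte-per-second values that `verify_qos_limits` compares above (e.g. 14703411 against 14680064, and 7362150 against 7340032) follow directly from the MiB values passed to `bdev_set_qos_limit`. A minimal sketch of that conversion (the exact tolerance applied by the `bc` checks in qos.sh is not shown in this excerpt, so only the limit arithmetic is reproduced here):

```shell
#!/usr/bin/env bash
# Convert a QoS limit expressed in MiB/s (as given to
# bdev_set_qos_limit --rw_mbytes_per_sec / --r_mbytes_per_sec)
# into the bytes/s figure that the measured fio bandwidth is checked against.
mib_to_bytes() {
    echo $(( $1 * 1024 * 1024 ))
}

mib_to_bytes 14   # 14680064, the 14 MiB/s limit set earlier in this run
mib_to_bytes 7    # 7340032, the 7 MiB/s read-only limit set afterwards
```

The measured results slightly exceed the limits (14703411 vs 14680064 is about 0.16% over), which the test tolerates since QoS enforcement is evaluated over discrete timeslices.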
00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@983 -- # rm -rf 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@139 -- # rpc_cmd iscsi_delete_target_node iqn.2016-06.io.spdk:Target1 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@141 -- # rm -f ./local-job0-0-verify.state 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@143 -- # killprocess 77285 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@948 -- # '[' -z 77285 ']' 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@952 -- # kill -0 77285 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@953 -- # uname 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77285 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:44.097 killing process with pid 77285 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77285' 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@967 -- # kill 77285 00:30:44.097 17:08:45 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@972 -- # wait 77285 00:30:47.381 
17:08:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@145 -- # iscsitestfini 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:30:47.381 00:30:47.381 real 0m44.868s 00:30:47.381 user 0m40.234s 00:30:47.381 sys 0m10.638s 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:30:47.381 ************************************ 00:30:47.381 END TEST iscsi_tgt_qos 00:30:47.381 ************************************ 00:30:47.381 17:08:48 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:30:47.381 17:08:48 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@39 -- # run_test iscsi_tgt_ip_migration /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration/ip_migration.sh 00:30:47.381 17:08:48 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:47.381 17:08:48 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:47.381 17:08:48 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:30:47.381 ************************************ 00:30:47.381 START TEST iscsi_tgt_ip_migration 00:30:47.381 ************************************ 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration/ip_migration.sh 00:30:47.381 * Looking for test storage... 
00:30:47.381 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@26 
-- # INITIATOR_NAME=ANY 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@11 -- # iscsitestinit 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@13 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@14 -- # pids=() 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@16 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:30:47.381 17:08:48 
iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:30:47.381 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:30:47.381 #define SPDK_CONFIG_H 00:30:47.381 #define SPDK_CONFIG_APPS 1 00:30:47.381 #define SPDK_CONFIG_ARCH native 00:30:47.381 #define SPDK_CONFIG_ASAN 1 00:30:47.381 #undef SPDK_CONFIG_AVAHI 00:30:47.381 #undef SPDK_CONFIG_CET 00:30:47.381 #define SPDK_CONFIG_COVERAGE 1 00:30:47.381 #define SPDK_CONFIG_CROSS_PREFIX 00:30:47.381 #undef SPDK_CONFIG_CRYPTO 00:30:47.381 #undef SPDK_CONFIG_CRYPTO_MLX5 00:30:47.381 #undef SPDK_CONFIG_CUSTOMOCF 00:30:47.381 #undef SPDK_CONFIG_DAOS 00:30:47.381 #define SPDK_CONFIG_DAOS_DIR 00:30:47.381 #define SPDK_CONFIG_DEBUG 1 00:30:47.381 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:30:47.381 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:30:47.381 #define SPDK_CONFIG_DPDK_INC_DIR 00:30:47.381 #define SPDK_CONFIG_DPDK_LIB_DIR 00:30:47.381 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:30:47.381 #undef SPDK_CONFIG_DPDK_UADK 00:30:47.381 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:30:47.381 #define SPDK_CONFIG_EXAMPLES 1 
00:30:47.381 #undef SPDK_CONFIG_FC 00:30:47.381 #define SPDK_CONFIG_FC_PATH 00:30:47.381 #define SPDK_CONFIG_FIO_PLUGIN 1 00:30:47.381 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:30:47.381 #undef SPDK_CONFIG_FUSE 00:30:47.381 #undef SPDK_CONFIG_FUZZER 00:30:47.381 #define SPDK_CONFIG_FUZZER_LIB 00:30:47.381 #undef SPDK_CONFIG_GOLANG 00:30:47.381 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:30:47.381 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:30:47.381 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:30:47.381 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:30:47.381 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:30:47.381 #undef SPDK_CONFIG_HAVE_LIBBSD 00:30:47.381 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:30:47.381 #define SPDK_CONFIG_IDXD 1 00:30:47.381 #define SPDK_CONFIG_IDXD_KERNEL 1 00:30:47.381 #undef SPDK_CONFIG_IPSEC_MB 00:30:47.381 #define SPDK_CONFIG_IPSEC_MB_DIR 00:30:47.381 #define SPDK_CONFIG_ISAL 1 00:30:47.381 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:30:47.381 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:30:47.381 #define SPDK_CONFIG_LIBDIR 00:30:47.381 #undef SPDK_CONFIG_LTO 00:30:47.381 #define SPDK_CONFIG_MAX_LCORES 128 00:30:47.381 #define SPDK_CONFIG_NVME_CUSE 1 00:30:47.381 #undef SPDK_CONFIG_OCF 00:30:47.381 #define SPDK_CONFIG_OCF_PATH 00:30:47.381 #define SPDK_CONFIG_OPENSSL_PATH 00:30:47.381 #undef SPDK_CONFIG_PGO_CAPTURE 00:30:47.381 #define SPDK_CONFIG_PGO_DIR 00:30:47.381 #undef SPDK_CONFIG_PGO_USE 00:30:47.381 #define SPDK_CONFIG_PREFIX /usr/local 00:30:47.381 #undef SPDK_CONFIG_RAID5F 00:30:47.381 #define SPDK_CONFIG_RBD 1 00:30:47.381 #define SPDK_CONFIG_RDMA 1 00:30:47.381 #define SPDK_CONFIG_RDMA_PROV verbs 00:30:47.381 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:30:47.381 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:30:47.381 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:30:47.381 #define SPDK_CONFIG_SHARED 1 00:30:47.381 #undef SPDK_CONFIG_SMA 00:30:47.381 #define SPDK_CONFIG_TESTS 1 00:30:47.381 #undef SPDK_CONFIG_TSAN 00:30:47.381 #define SPDK_CONFIG_UBLK 1 
00:30:47.381 #define SPDK_CONFIG_UBSAN 1 00:30:47.381 #undef SPDK_CONFIG_UNIT_TESTS 00:30:47.381 #undef SPDK_CONFIG_URING 00:30:47.381 #define SPDK_CONFIG_URING_PATH 00:30:47.381 #undef SPDK_CONFIG_URING_ZNS 00:30:47.381 #undef SPDK_CONFIG_USDT 00:30:47.381 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:30:47.381 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:30:47.381 #undef SPDK_CONFIG_VFIO_USER 00:30:47.381 #define SPDK_CONFIG_VFIO_USER_DIR 00:30:47.381 #define SPDK_CONFIG_VHOST 1 00:30:47.381 #define SPDK_CONFIG_VIRTIO 1 00:30:47.382 #undef SPDK_CONFIG_VTUNE 00:30:47.382 #define SPDK_CONFIG_VTUNE_DIR 00:30:47.382 #define SPDK_CONFIG_WERROR 1 00:30:47.382 #define SPDK_CONFIG_WPDK_DIR 00:30:47.382 #undef SPDK_CONFIG_XNVME 00:30:47.382 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:30:47.382 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:30:47.382 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@17 -- # NETMASK=127.0.0.0/24 00:30:47.382 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@18 -- # MIGRATION_ADDRESS=127.0.0.2 00:30:47.382 Running ip migration tests 00:30:47.382 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@56 -- # echo 'Running ip migration tests' 00:30:47.382 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@57 -- # timing_enter start_iscsi_tgt_0 00:30:47.382 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:47.382 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:30:47.382 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@58 -- # rpc_first_addr=/var/tmp/spdk0.sock 00:30:47.382 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@59 -- # iscsi_tgt_start /var/tmp/spdk0.sock 1 00:30:47.382 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- 
ip_migration/ip_migration.sh@39 -- # pid=78062 00:30:47.382 Process pid: 78062 00:30:47.382 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@40 -- # echo 'Process pid: 78062' 00:30:47.382 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@41 -- # pids+=($pid) 00:30:47.382 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@43 -- # trap 'kill_all_iscsi_target; exit 1' SIGINT SIGTERM EXIT 00:30:47.382 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@45 -- # waitforlisten 78062 /var/tmp/spdk0.sock 00:30:47.382 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk0.sock -m 1 --wait-for-rpc 00:30:47.382 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@829 -- # '[' -z 78062 ']' 00:30:47.382 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk0.sock 00:30:47.382 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:47.382 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock...' 00:30:47.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock... 00:30:47.382 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:47.382 17:08:48 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:30:47.382 [2024-07-22 17:08:48.742877] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:30:47.382 [2024-07-22 17:08:48.743108] [ DPDK EAL parameters: iscsi --no-shconf -c 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78062 ] 00:30:47.382 [2024-07-22 17:08:48.919253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.655 [2024-07-22 17:08:49.180605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.245 17:08:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:48.245 17:08:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@862 -- # return 0 00:30:48.245 17:08:49 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@46 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_set_options -o 30 -a 64 00:30:48.245 17:08:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.245 17:08:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:30:48.245 17:08:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.245 17:08:49 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@47 -- # rpc_cmd -s /var/tmp/spdk0.sock framework_start_init 00:30:48.245 17:08:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.245 17:08:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.179 iscsi_tgt is listening. Running tests... 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@48 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@50 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_initiator_group 2 ANY 127.0.0.0/24 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@51 -- # rpc_cmd -s /var/tmp/spdk0.sock bdev_malloc_create 64 512 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:30:49.179 Malloc0 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@53 -- # trap 'kill_all_iscsi_target; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@60 -- # timing_exit start_iscsi_tgt_0 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@62 -- # timing_enter start_iscsi_tgt_1 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@63 -- # rpc_second_addr=/var/tmp/spdk1.sock 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- 
ip_migration/ip_migration.sh@64 -- # iscsi_tgt_start /var/tmp/spdk1.sock 2 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@39 -- # pid=78102 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@40 -- # echo 'Process pid: 78102' 00:30:49.179 Process pid: 78102 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@41 -- # pids+=($pid) 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@43 -- # trap 'kill_all_iscsi_target; exit 1' SIGINT SIGTERM EXIT 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@45 -- # waitforlisten 78102 /var/tmp/spdk1.sock 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk1.sock -m 2 --wait-for-rpc 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@829 -- # '[' -z 78102 ']' 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk1.sock 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:49.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock... 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock...' 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:49.179 17:08:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:30:49.179 [2024-07-22 17:08:50.786569] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:30:49.179 [2024-07-22 17:08:50.787863] [ DPDK EAL parameters: iscsi --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78102 ] 00:30:49.437 [2024-07-22 17:08:50.969023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.697 [2024-07-22 17:08:51.287037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.264 17:08:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:50.264 17:08:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@862 -- # return 0 00:30:50.264 17:08:51 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@46 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_set_options -o 30 -a 64 00:30:50.264 17:08:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.264 17:08:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:30:50.264 17:08:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.264 17:08:51 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@47 -- # rpc_cmd -s /var/tmp/spdk1.sock framework_start_init 00:30:50.264 17:08:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.264 17:08:51 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.199 iscsi_tgt is listening. Running tests... 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@48 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@50 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_initiator_group 2 ANY 127.0.0.0/24 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@51 -- # rpc_cmd -s /var/tmp/spdk1.sock bdev_malloc_create 64 512 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:30:51.199 Malloc0 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@53 -- # trap 'kill_all_iscsi_target; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@65 -- # timing_exit start_iscsi_tgt_1 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@67 -- # rpc_add_target_node /var/tmp/spdk0.sock 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@28 -- # ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@29 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_portal_group 1 127.0.0.2:3260 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@30 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@31 -- # ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int 00:30:51.199 17:08:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@69 -- # sleep 1 00:30:52.574 17:08:53 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@70 -- # iscsiadm -m discovery -t sendtargets -p 127.0.0.2:3260 00:30:52.574 127.0.0.2:3260,1 iqn.2016-06.io.spdk:target1 00:30:52.574 17:08:53 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@71 -- # sleep 1 00:30:53.509 17:08:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@72 -- # iscsiadm -m node --login -p 127.0.0.2:3260 00:30:53.509 Logging in to [iface: default, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] 00:30:53.509 Login to [iface: default, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] successful. 
00:30:53.509 17:08:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@73 -- # waitforiscsidevices 1 00:30:53.509 17:08:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@116 -- # local num=1 00:30:53.509 17:08:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:30:53.509 17:08:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:30:53.509 17:08:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:30:53.509 [2024-07-22 17:08:54.878361] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:30:53.509 17:08:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:30:53.509 17:08:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # n=1 00:30:53.509 17:08:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:30:53.510 17:08:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@123 -- # return 0 00:30:53.510 17:08:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@77 -- # fiopid=78186 00:30:53.510 17:08:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@78 -- # sleep 3 00:30:53.510 17:08:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 32 -t randrw -r 12 00:30:53.510 [global] 00:30:53.510 thread=1 00:30:53.510 invalidate=1 00:30:53.510 rw=randrw 00:30:53.510 time_based=1 00:30:53.510 runtime=12 00:30:53.510 ioengine=libaio 00:30:53.510 direct=1 00:30:53.510 bs=4096 00:30:53.510 iodepth=32 00:30:53.510 norandommap=1 00:30:53.510 numjobs=1 00:30:53.510 00:30:53.510 [job0] 00:30:53.510 filename=/dev/sda 00:30:53.510 queue_depth set to 113 (sda) 00:30:53.510 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32 00:30:53.510 fio-3.35 
00:30:53.510 Starting 1 thread 00:30:53.510 [2024-07-22 17:08:55.061841] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:30:56.822 17:08:57 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@80 -- # rpc_cmd -s /var/tmp/spdk0.sock spdk_kill_instance SIGTERM 00:30:56.822 17:08:57 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.822 17:08:57 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:30:57.756 17:08:59 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.756 17:08:59 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@81 -- # wait 78062 00:30:59.213 17:09:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@83 -- # rpc_add_target_node /var/tmp/spdk1.sock 00:30:59.213 17:09:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@28 -- # ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int 00:30:59.213 17:09:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@29 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_portal_group 1 127.0.0.2:3260 00:30:59.213 17:09:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.213 17:09:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:30:59.213 17:09:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.213 17:09:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@30 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d 00:30:59.213 17:09:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.213 17:09:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:30:59.213 17:09:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:30:59.213 17:09:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@31 -- # ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int 00:30:59.213 17:09:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@85 -- # wait 78186 00:31:05.773 [2024-07-22 17:09:07.172760] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:05.773 00:31:05.773 job0: (groupid=0, jobs=1): err= 0: pid=78213: Mon Jul 22 17:09:07 2024 00:31:05.773 read: IOPS=7102, BW=27.7MiB/s (29.1MB/s)(333MiB/12001msec) 00:31:05.773 slat (usec): min=2, max=266, avg= 6.66, stdev= 7.31 00:31:05.773 clat (usec): min=162, max=5008.6k, avg=2376.94, stdev=72747.10 00:31:05.773 lat (usec): min=183, max=5008.6k, avg=2383.60, stdev=72747.19 00:31:05.773 clat percentiles (usec): 00:31:05.773 | 1.00th=[ 840], 5.00th=[ 988], 10.00th=[ 1057], 00:31:05.773 | 20.00th=[ 1156], 30.00th=[ 1205], 40.00th=[ 1237], 00:31:05.773 | 50.00th=[ 1287], 60.00th=[ 1336], 70.00th=[ 1401], 00:31:05.773 | 80.00th=[ 1500], 90.00th=[ 1647], 95.00th=[ 1745], 00:31:05.773 | 99.00th=[ 1958], 99.50th=[ 2057], 99.90th=[ 2704], 00:31:05.773 | 99.95th=[ 3163], 99.99th=[4999611] 00:31:05.773 bw ( KiB/s): min=18320, max=51008, per=100.00%, avg=45123.36, stdev=9802.84, samples=14 00:31:05.773 iops : min= 4580, max=12752, avg=11280.79, stdev=2450.69, samples=14 00:31:05.773 write: IOPS=7072, BW=27.6MiB/s (29.0MB/s)(332MiB/12001msec); 0 zone resets 00:31:05.773 slat (usec): min=2, max=391, avg= 6.67, stdev= 7.55 00:31:05.773 clat (usec): min=205, max=5008.5k, avg=2122.08, stdev=64298.36 00:31:05.773 lat (usec): min=242, max=5008.5k, avg=2128.75, stdev=64298.42 00:31:05.773 clat percentiles (usec): 00:31:05.773 | 1.00th=[ 816], 5.00th=[ 955], 10.00th=[ 1020], 00:31:05.773 | 20.00th=[ 1106], 30.00th=[ 1156], 40.00th=[ 1205], 00:31:05.773 | 50.00th=[ 1254], 60.00th=[ 1319], 70.00th=[ 1385], 00:31:05.773 | 80.00th=[ 1500], 90.00th=[ 1631], 95.00th=[ 1745], 
00:31:05.773 | 99.00th=[ 1942], 99.50th=[ 2040], 99.90th=[ 2442], 00:31:05.773 | 99.95th=[ 3097], 99.99th=[4999611] 00:31:05.773 bw ( KiB/s): min=18432, max=51200, per=100.00%, avg=44909.00, stdev=9656.07, samples=14 00:31:05.773 iops : min= 4608, max=12800, avg=11227.36, stdev=2414.07, samples=14 00:31:05.773 lat (usec) : 250=0.01%, 500=0.02%, 750=0.25%, 1000=6.58% 00:31:05.773 lat (msec) : 2=92.44%, 4=0.70%, 10=0.01%, >=2000=0.02% 00:31:05.773 cpu : usr=4.85%, sys=8.72%, ctx=12869, majf=0, minf=1 00:31:05.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0% 00:31:05.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:31:05.773 issued rwts: total=85242,84876,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.773 latency : target=0, window=0, percentile=100.00%, depth=32 00:31:05.773 00:31:05.773 Run status group 0 (all jobs): 00:31:05.773 READ: bw=27.7MiB/s (29.1MB/s), 27.7MiB/s-27.7MiB/s (29.1MB/s-29.1MB/s), io=333MiB (349MB), run=12001-12001msec 00:31:05.773 WRITE: bw=27.6MiB/s (29.0MB/s), 27.6MiB/s-27.6MiB/s (29.0MB/s-29.0MB/s), io=332MiB (348MB), run=12001-12001msec 00:31:05.773 00:31:05.773 Disk stats (read/write): 00:31:05.773 sda: ios=83857/83498, merge=0/0, ticks=190551/171822, in_queue=362374, util=99.37% 00:31:05.773 Cleaning up iSCSI connection 00:31:05.773 17:09:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@87 -- # trap - SIGINT SIGTERM EXIT 00:31:05.773 17:09:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@89 -- # iscsicleanup 00:31:05.773 17:09:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:31:05.773 17:09:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:31:05.773 Logging out of session [sid: 21, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] 
00:31:05.773 Logout of [sid: 21, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] successful. 00:31:05.773 17:09:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:31:05.773 17:09:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@983 -- # rm -rf 00:31:05.773 17:09:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@91 -- # rpc_cmd -s /var/tmp/spdk1.sock spdk_kill_instance SIGTERM 00:31:05.773 17:09:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.773 17:09:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:31:07.150 17:09:08 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.150 17:09:08 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@92 -- # wait 78102 00:31:08.531 17:09:09 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@93 -- # iscsitestfini 00:31:08.531 17:09:09 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:31:08.531 00:31:08.531 real 0m21.416s 00:31:08.531 user 0m29.714s 00:31:08.531 sys 0m3.827s 00:31:08.532 17:09:09 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:08.532 17:09:09 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:31:08.532 ************************************ 00:31:08.532 END TEST iscsi_tgt_ip_migration 00:31:08.532 ************************************ 00:31:08.532 17:09:09 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:31:08.532 17:09:09 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@40 -- # run_test iscsi_tgt_trace_record /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record/trace_record.sh 00:31:08.532 17:09:09 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:08.532 17:09:09 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:08.532 
17:09:09 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:31:08.532 ************************************ 00:31:08.532 START TEST iscsi_tgt_trace_record 00:31:08.532 ************************************ 00:31:08.532 17:09:09 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record/trace_record.sh 00:31:08.532 * Looking for test storage... 00:31:08.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- 
iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@11 -- # iscsitestinit 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@13 -- # TRACE_TMP_FOLDER=./tmp-trace 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@14 -- # TRACE_RECORD_OUTPUT=./tmp-trace/record.trace 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@15 -- # TRACE_RECORD_NOTICE_LOG=./tmp-trace/record.notice 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@16 -- # TRACE_TOOL_LOG=./tmp-trace/trace.log 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@22 -- # '[' -z 10.0.0.1 ']' 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@27 -- # '[' -z 10.0.0.2 ']' 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@32 -- # NUM_TRACE_ENTRIES=4096 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@33 -- # MALLOC_BDEV_SIZE=64 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@34 -- # MALLOC_BLOCK_SIZE=4096 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@36 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@37 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@39 -- # timing_enter start_iscsi_tgt 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:31:08.532 start iscsi_tgt with trace enabled 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@41 -- # echo 'start iscsi_tgt with trace enabled' 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@43 -- # iscsi_pid=78431 00:31:08.532 Process pid: 78431 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@44 -- # echo 'Process pid: 78431' 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@42 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xf --num-trace-entries 4096 --tpoint-group all 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@46 -- # trap 'killprocess $iscsi_pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@48 -- # waitforlisten 78431 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@829 -- # '[' -z 78431 ']' 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:08.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:08.532 17:09:10 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:31:08.790 [2024-07-22 17:09:10.202222] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:08.790 [2024-07-22 17:09:10.202541] [ DPDK EAL parameters: iscsi --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78431 ] 00:31:08.790 [2024-07-22 17:09:10.385263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:09.357 [2024-07-22 17:09:10.706150] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask all specified. 00:31:09.357 [2024-07-22 17:09:10.706221] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s iscsi -p 78431' to capture a snapshot of events at runtime. 00:31:09.357 [2024-07-22 17:09:10.706242] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:09.357 [2024-07-22 17:09:10.706256] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:09.357 [2024-07-22 17:09:10.706296] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/iscsi_trace.pid78431 for offline analysis/debug. 
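The `app_setup_trace` notices above name two ways to capture the tracepoints this test enables. A minimal sketch, building the exact command lines shown in this log (the pid 78431 belongs to this particular run and would differ on another):

```shell
# Pid of the iscsi_tgt instance started above; run-specific.
ISCSI_PID=78431
# Continuous capture to a file, as trace_record.sh does via spdk_trace_record:
REC_CMD="spdk_trace_record -s iscsi -p $ISCSI_PID -f ./tmp-trace/record.trace -q"
# One-shot runtime snapshot, as suggested by the startup notice:
SNAP_CMD="spdk_trace -s iscsi -p $ISCSI_PID"
echo "$REC_CMD"
echo "$SNAP_CMD"
```

Per the notice, copying `/dev/shm/iscsi_trace.pid78431` also allows offline analysis after the target exits.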
00:31:09.357 [2024-07-22 17:09:10.706516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:09.357 [2024-07-22 17:09:10.706757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:09.357 [2024-07-22 17:09:10.707490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.357 [2024-07-22 17:09:10.707713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:10.292 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:10.292 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@862 -- # return 0 00:31:10.292 iscsi_tgt is listening. Running tests... 00:31:10.292 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@50 -- # echo 'iscsi_tgt is listening. Running tests...' 00:31:10.292 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@52 -- # timing_exit start_iscsi_tgt 00:31:10.292 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:10.292 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:31:10.292 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@54 -- # mkdir -p ./tmp-trace 00:31:10.292 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@56 -- # record_pid=78467 00:31:10.292 Trace record pid: 78467 00:31:10.292 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@57 -- # echo 'Trace record pid: 78467' 00:31:10.292 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace_record -s iscsi -p 78431 -f ./tmp-trace/record.trace -q 00:31:10.292 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@59 -- # RPCS= 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@60 -- # RPCS+='iscsi_create_portal_group 1 10.0.0.1:3260\n' 
00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@61 -- # RPCS+='iscsi_create_initiator_group 2 ANY 10.0.0.2/32\n' 00:31:10.293 Create bdevs and target nodes 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@63 -- # echo 'Create bdevs and target nodes' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@64 -- # CONNECTION_NUMBER=15 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # seq 0 15 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc0\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target0 Target0_alias Malloc0:0 1:2 256 -d\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc1\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target1 Target1_alias Malloc1:0 1:2 256 -d\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc2\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target2 Target2_alias Malloc2:0 1:2 256 -d\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in 
$(seq 0 $CONNECTION_NUMBER) 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc3\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target3 Target3_alias Malloc3:0 1:2 256 -d\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc4\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target4 Target4_alias Malloc4:0 1:2 256 -d\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc5\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target5 Target5_alias Malloc5:0 1:2 256 -d\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc6\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target6 Target6_alias Malloc6:0 1:2 256 -d\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc7\n' 00:31:10.293 17:09:11 
iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target7 Target7_alias Malloc7:0 1:2 256 -d\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc8\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target8 Target8_alias Malloc8:0 1:2 256 -d\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc9\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target9 Target9_alias Malloc9:0 1:2 256 -d\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc10\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target10 Target10_alias Malloc10:0 1:2 256 -d\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc11\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target11 Target11_alias Malloc11:0 1:2 256 -d\n' 00:31:10.293 17:09:11 
iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc12\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target12 Target12_alias Malloc12:0 1:2 256 -d\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc13\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target13 Target13_alias Malloc13:0 1:2 256 -d\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc14\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target14 Target14_alias Malloc14:0 1:2 256 -d\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc15\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target15 Target15_alias Malloc15:0 1:2 256 -d\n' 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:10.293 17:09:11 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@69 -- # echo -e iscsi_create_portal_group 1 '10.0.0.1:3260\niscsi_create_initiator_group' 2 ANY '10.0.0.2/32\nbdev_malloc_create' 64 4096 -b 'Malloc0\niscsi_create_target_node' Target0 Target0_alias Malloc0:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc1\niscsi_create_target_node' Target1 Target1_alias Malloc1:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc2\niscsi_create_target_node' Target2 Target2_alias Malloc2:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc3\niscsi_create_target_node' Target3 Target3_alias Malloc3:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc4\niscsi_create_target_node' Target4 Target4_alias Malloc4:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc5\niscsi_create_target_node' Target5 Target5_alias Malloc5:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc6\niscsi_create_target_node' Target6 Target6_alias Malloc6:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc7\niscsi_create_target_node' Target7 Target7_alias Malloc7:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc8\niscsi_create_target_node' Target8 Target8_alias Malloc8:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc9\niscsi_create_target_node' Target9 Target9_alias Malloc9:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc10\niscsi_create_target_node' Target10 Target10_alias Malloc10:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc11\niscsi_create_target_node' Target11 Target11_alias Malloc11:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc12\niscsi_create_target_node' Target12 Target12_alias Malloc12:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc13\niscsi_create_target_node' Target13 Target13_alias Malloc13:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc14\niscsi_create_target_node' Target14 Target14_alias Malloc14:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc15\niscsi_create_target_node' Target15 Target15_alias Malloc15:0 1:2 256 '-d\n' 00:31:12.216 Malloc0 00:31:12.216 Malloc1 
00:31:12.216 Malloc2 00:31:12.216 Malloc3 00:31:12.216 Malloc4 00:31:12.216 Malloc5 00:31:12.216 Malloc6 00:31:12.216 Malloc7 00:31:12.216 Malloc8 00:31:12.216 Malloc9 00:31:12.216 Malloc10 00:31:12.216 Malloc11 00:31:12.216 Malloc12 00:31:12.216 Malloc13 00:31:12.216 Malloc14 00:31:12.216 Malloc15 00:31:12.216 17:09:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@71 -- # sleep 1 00:31:12.783 17:09:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@73 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:31:12.783 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target0 00:31:12.783 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:31:12.783 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:31:12.783 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:31:12.783 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:31:12.783 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:31:12.783 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:31:12.783 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:31:12.783 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:31:12.783 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:31:12.783 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:31:12.783 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target11 00:31:12.783 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target12 00:31:12.783 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target13 00:31:12.783 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target14 00:31:12.783 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target15 00:31:12.783 17:09:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@74 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:31:13.042 [2024-07-22 17:09:14.420135] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:13.042 [2024-07-22 17:09:14.444080] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:13.042 [2024-07-22 17:09:14.448889] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:13.042 [2024-07-22 
17:09:14.482796] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:13.042 [2024-07-22 17:09:14.519738] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:13.042 [2024-07-22 17:09:14.524815] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:13.042 [2024-07-22 17:09:14.540935] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:13.042 [2024-07-22 17:09:14.593070] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:13.042 [2024-07-22 17:09:14.613854] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:13.042 [2024-07-22 17:09:14.648765] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:13.339 [2024-07-22 17:09:14.684163] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:13.339 [2024-07-22 17:09:14.711674] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:13.339 [2024-07-22 17:09:14.731796] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:13.339 [2024-07-22 17:09:14.750348] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:13.339 [2024-07-22 17:09:14.792134] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:13.339 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] 00:31:13.339 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:31:13.339 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:31:13.339 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:31:13.339 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:31:13.339 Logging in to [iface: 
default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:31:13.339 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:31:13.339 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:31:13.339 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:31:13.339 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:31:13.339 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:31:13.339 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:31:13.339 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:31:13.339 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:31:13.339 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:31:13.339 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:31:13.340 Login to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:31:13.340 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:31:13.340 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:31:13.340 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:31:13.340 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:31:13.340 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:31:13.340 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 
00:31:13.340 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:31:13.340 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:31:13.340 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:31:13.340 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:31:13.340 Login to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:31:13.340 Login to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:31:13.340 Login to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:31:13.340 Login to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:31:13.340 Login to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 
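After the sixteen logins succeed, the trace below shows `waitforiscsidevices 16` polling until the kernel has attached all disks. A sketch of that helper's pattern from iscsi_tgt/common.sh; `count_disks` here is a hypothetical stub standing in for the real probe, `iscsiadm -m session -P 3 | grep -c 'Attached scsi disk sd[a-z]*'`:

```shell
# Poll up to 20 times, once per second, for the expected device count.
count_disks() { echo 16; }   # stub for illustration; real check uses iscsiadm
num=16
for i in $(seq 1 20); do
  n=$(count_disks)
  if [ "$n" -eq "$num" ]; then
    break
  fi
  sleep 1
done
if [ "$n" -ne "$num" ]; then
  echo "timed out waiting for $num devices" >&2
  exit 1
fi
echo "all $num iSCSI disks attached"
```

The bounded retry matters on the CI VMs: udev device creation can lag the iSCSI login by a second or more, so a single immediate check would flake.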
00:31:13.340 17:09:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@75 -- # waitforiscsidevices 16 00:31:13.340 17:09:14 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@116 -- # local num=16 00:31:13.340 17:09:14 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:31:13.340 17:09:14 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:31:13.340 [2024-07-22 17:09:14.803480] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:13.340 17:09:14 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:31:13.340 17:09:14 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:31:13.340 17:09:14 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # n=16 00:31:13.340 17:09:14 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@120 -- # '[' 16 -ne 16 ']' 00:31:13.340 17:09:14 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@123 -- # return 0 00:31:13.340 17:09:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@77 -- # trap 'iscsicleanup; killprocess $iscsi_pid; killprocess $record_pid; delete_tmp_files; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:31:13.340 Running FIO 00:31:13.340 17:09:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@79 -- # echo 'Running FIO' 00:31:13.340 17:09:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 00:31:13.340 [global] 00:31:13.340 thread=1 00:31:13.340 invalidate=1 00:31:13.340 rw=randrw 00:31:13.340 time_based=1 00:31:13.340 runtime=1 00:31:13.340 ioengine=libaio 00:31:13.340 direct=1 00:31:13.340 bs=131072 00:31:13.340 iodepth=32 00:31:13.340 norandommap=1 00:31:13.340 numjobs=1 00:31:13.340 00:31:13.612 [job0] 00:31:13.612 filename=/dev/sda 00:31:13.612 [job1] 
00:31:13.612 filename=/dev/sdb 00:31:13.612 [job2] 00:31:13.612 filename=/dev/sdc 00:31:13.612 [job3] 00:31:13.612 filename=/dev/sde 00:31:13.612 [job4] 00:31:13.612 filename=/dev/sdd 00:31:13.612 [job5] 00:31:13.612 filename=/dev/sdf 00:31:13.612 [job6] 00:31:13.612 filename=/dev/sdg 00:31:13.612 [job7] 00:31:13.612 filename=/dev/sdh 00:31:13.612 [job8] 00:31:13.612 filename=/dev/sdi 00:31:13.612 [job9] 00:31:13.612 filename=/dev/sdj 00:31:13.612 [job10] 00:31:13.612 filename=/dev/sdk 00:31:13.612 [job11] 00:31:13.612 filename=/dev/sdl 00:31:13.612 [job12] 00:31:13.612 filename=/dev/sdm 00:31:13.612 [job13] 00:31:13.612 filename=/dev/sdn 00:31:13.612 [job14] 00:31:13.612 filename=/dev/sdo 00:31:13.612 [job15] 00:31:13.612 filename=/dev/sdp 00:31:13.612 queue_depth set to 113 (sda) 00:31:13.612 queue_depth set to 113 (sdb) 00:31:13.612 queue_depth set to 113 (sdc) 00:31:13.612 queue_depth set to 113 (sde) 00:31:13.870 queue_depth set to 113 (sdd) 00:31:13.870 queue_depth set to 113 (sdf) 00:31:13.870 queue_depth set to 113 (sdg) 00:31:13.870 queue_depth set to 113 (sdh) 00:31:13.870 queue_depth set to 113 (sdi) 00:31:13.870 queue_depth set to 113 (sdj) 00:31:13.870 queue_depth set to 113 (sdk) 00:31:13.870 queue_depth set to 113 (sdl) 00:31:13.870 queue_depth set to 113 (sdm) 00:31:13.870 queue_depth set to 113 (sdn) 00:31:13.870 queue_depth set to 113 (sdo) 00:31:13.870 queue_depth set to 113 (sdp) 00:31:14.130 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:31:14.130 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:31:14.130 job2: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:31:14.130 job3: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:31:14.130 job4: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 
128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:31:14.130 job5: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:31:14.130 job6: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:31:14.130 job7: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:31:14.130 job8: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:31:14.130 job9: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:31:14.130 job10: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:31:14.130 job11: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:31:14.130 job12: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:31:14.130 job13: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:31:14.130 job14: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:31:14.130 job15: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:31:14.130 fio-3.35 00:31:14.130 Starting 16 threads 00:31:14.130 [2024-07-22 17:09:15.583637] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:14.130 [2024-07-22 17:09:15.588203] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:14.130 [2024-07-22 17:09:15.592182] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:14.130 [2024-07-22 17:09:15.595390] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:14.130 
[2024-07-22 17:09:15.598155] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:14.130 [2024-07-22 17:09:15.600892] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:14.130 [2024-07-22 17:09:15.603940] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:14.130 [2024-07-22 17:09:15.608084] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:14.130 [2024-07-22 17:09:15.610962] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:14.130 [2024-07-22 17:09:15.613813] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:14.131 [2024-07-22 17:09:15.616960] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:14.131 [2024-07-22 17:09:15.619827] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:14.131 [2024-07-22 17:09:15.622733] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:14.131 [2024-07-22 17:09:15.625241] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:14.131 [2024-07-22 17:09:15.628238] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:14.131 [2024-07-22 17:09:15.631644] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:15.506 [2024-07-22 17:09:16.964390] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:15.506 [2024-07-22 17:09:16.968632] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:15.506 [2024-07-22 17:09:16.971694] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:15.506 [2024-07-22 17:09:16.974918] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:15.506 [2024-07-22 17:09:16.977728] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:15.506 [2024-07-22 17:09:16.980455] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:15.506 [2024-07-22 17:09:16.982889] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:15.506 [2024-07-22 17:09:16.985252] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:15.506 [2024-07-22 17:09:16.987636] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:15.506 [2024-07-22 17:09:16.990358] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:15.506 [2024-07-22 17:09:16.992866] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:15.506 [2024-07-22 17:09:16.996007] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:15.506 [2024-07-22 17:09:16.999881] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:15.506 [2024-07-22 17:09:17.003737] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:15.506 [2024-07-22 17:09:17.006328] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:15.506 [2024-07-22 17:09:17.008508] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:15.506 00:31:15.506 job0: (groupid=0, jobs=1): err= 0: pid=78848: Mon Jul 22 17:09:17 2024 00:31:15.506 read: IOPS=393, BW=49.2MiB/s (51.6MB/s)(51.2MiB/1042msec) 00:31:15.506 slat (usec): min=7, max=2309, avg=37.39, stdev=133.09 00:31:15.506 clat (usec): min=1098, max=51039, avg=10085.18, stdev=4203.91 00:31:15.506 lat (usec): min=1113, max=51067, avg=10122.57, stdev=4197.30 00:31:15.506 clat percentiles (usec): 00:31:15.506 | 1.00th=[ 1844], 5.00th=[ 7898], 10.00th=[ 8586], 20.00th=[ 9110], 00:31:15.506 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 
00:31:15.506 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11076], 95.00th=[11731], 00:31:15.506 | 99.00th=[18482], 99.50th=[46400], 99.90th=[51119], 99.95th=[51119], 00:31:15.506 | 99.99th=[51119] 00:31:15.506 bw ( KiB/s): min=50331, max=53611, per=6.31%, avg=51971.00, stdev=2319.31, samples=2 00:31:15.506 iops : min= 393, max= 418, avg=405.50, stdev=17.68, samples=2 00:31:15.506 write: IOPS=415, BW=51.9MiB/s (54.5MB/s)(54.1MiB/1042msec); 0 zone resets 00:31:15.506 slat (usec): min=9, max=1560, avg=38.00, stdev=95.69 00:31:15.506 clat (msec): min=5, max=114, avg=67.25, stdev= 9.97 00:31:15.506 lat (msec): min=5, max=114, avg=67.29, stdev= 9.97 00:31:15.506 clat percentiles (msec): 00:31:15.506 | 1.00th=[ 23], 5.00th=[ 56], 10.00th=[ 62], 20.00th=[ 64], 00:31:15.506 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 69], 60.00th=[ 70], 00:31:15.506 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 74], 95.00th=[ 75], 00:31:15.506 | 99.00th=[ 101], 99.50th=[ 110], 99.90th=[ 115], 99.95th=[ 115], 00:31:15.506 | 99.99th=[ 115] 00:31:15.506 bw ( KiB/s): min=51815, max=52119, per=6.22%, avg=51967.00, stdev=214.96, samples=2 00:31:15.506 iops : min= 404, max= 407, avg=405.50, stdev= 2.12, samples=2 00:31:15.506 lat (msec) : 2=0.59%, 4=1.07%, 10=28.00%, 20=18.86%, 50=1.78% 00:31:15.506 lat (msec) : 100=49.11%, 250=0.59% 00:31:15.506 cpu : usr=0.67%, sys=1.63%, ctx=797, majf=0, minf=1 00:31:15.506 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=96.3%, >=64=0.0% 00:31:15.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.506 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:31:15.506 issued rwts: total=410,433,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.506 latency : target=0, window=0, percentile=100.00%, depth=32 00:31:15.506 job1: (groupid=0, jobs=1): err= 0: pid=78849: Mon Jul 22 17:09:17 2024 00:31:15.506 read: IOPS=358, BW=44.8MiB/s (47.0MB/s)(46.9MiB/1046msec) 00:31:15.506 slat (usec): min=8, max=427, avg=23.04, 
stdev=34.30 00:31:15.506 clat (usec): min=2152, max=50677, avg=10448.17, stdev=3853.46 00:31:15.506 lat (usec): min=2195, max=50706, avg=10471.21, stdev=3852.88 00:31:15.506 clat percentiles (usec): 00:31:15.506 | 1.00th=[ 7701], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9372], 00:31:15.506 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:31:15.506 | 70.00th=[10290], 80.00th=[10552], 90.00th=[11076], 95.00th=[13829], 00:31:15.506 | 99.00th=[27395], 99.50th=[47973], 99.90th=[50594], 99.95th=[50594], 00:31:15.506 | 99.99th=[50594] 00:31:15.506 bw ( KiB/s): min=47520, max=47616, per=5.78%, avg=47568.00, stdev=67.88, samples=2 00:31:15.506 iops : min= 371, max= 372, avg=371.50, stdev= 0.71, samples=2 00:31:15.506 write: IOPS=411, BW=51.4MiB/s (53.9MB/s)(53.8MiB/1046msec); 0 zone resets 00:31:15.506 slat (usec): min=9, max=517, avg=26.80, stdev=34.29 00:31:15.506 clat (msec): min=20, max=116, avg=68.53, stdev= 8.94 00:31:15.506 lat (msec): min=20, max=116, avg=68.56, stdev= 8.95 00:31:15.506 clat percentiles (msec): 00:31:15.506 | 1.00th=[ 36], 5.00th=[ 60], 10.00th=[ 63], 20.00th=[ 65], 00:31:15.506 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 69], 60.00th=[ 70], 00:31:15.506 | 70.00th=[ 72], 80.00th=[ 73], 90.00th=[ 75], 95.00th=[ 79], 00:31:15.506 | 99.00th=[ 105], 99.50th=[ 107], 99.90th=[ 116], 99.95th=[ 116], 00:31:15.506 | 99.99th=[ 116] 00:31:15.506 bw ( KiB/s): min=49152, max=53397, per=6.13%, avg=51274.50, stdev=3001.67, samples=2 00:31:15.506 iops : min= 384, max= 417, avg=400.50, stdev=23.33, samples=2 00:31:15.506 lat (msec) : 4=0.12%, 10=24.60%, 20=21.37%, 50=1.74%, 100=51.55% 00:31:15.506 lat (msec) : 250=0.62% 00:31:15.506 cpu : usr=0.29%, sys=1.63%, ctx=763, majf=0, minf=1 00:31:15.506 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=96.1%, >=64=0.0% 00:31:15.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.506 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 
00:31:15.506 issued rwts: total=375,430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.506 latency : target=0, window=0, percentile=100.00%, depth=32 00:31:15.506 job2: (groupid=0, jobs=1): err= 0: pid=78850: Mon Jul 22 17:09:17 2024 00:31:15.506 read: IOPS=413, BW=51.6MiB/s (54.2MB/s)(54.1MiB/1048msec) 00:31:15.506 slat (usec): min=7, max=771, avg=26.26, stdev=48.90 00:31:15.506 clat (usec): min=1011, max=56025, avg=10647.54, stdev=3852.57 00:31:15.507 lat (usec): min=1024, max=56046, avg=10673.80, stdev=3852.64 00:31:15.507 clat percentiles (usec): 00:31:15.507 | 1.00th=[ 2835], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:31:15.507 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10683], 00:31:15.507 | 70.00th=[11076], 80.00th=[11600], 90.00th=[12125], 95.00th=[12518], 00:31:15.507 | 99.00th=[15926], 99.50th=[48497], 99.90th=[55837], 99.95th=[55837], 00:31:15.507 | 99.99th=[55837] 00:31:15.507 bw ( KiB/s): min=55040, max=55150, per=6.69%, avg=55095.00, stdev=77.78, samples=2 00:31:15.507 iops : min= 430, max= 430, avg=430.00, stdev= 0.00, samples=2 00:31:15.507 write: IOPS=396, BW=49.6MiB/s (52.0MB/s)(52.0MiB/1048msec); 0 zone resets 00:31:15.507 slat (usec): min=9, max=944, avg=33.99, stdev=72.33 00:31:15.507 clat (msec): min=4, max=117, avg=69.27, stdev=11.82 00:31:15.507 lat (msec): min=4, max=117, avg=69.30, stdev=11.82 00:31:15.507 clat percentiles (msec): 00:31:15.507 | 1.00th=[ 20], 5.00th=[ 53], 10.00th=[ 63], 20.00th=[ 66], 00:31:15.507 | 30.00th=[ 68], 40.00th=[ 69], 50.00th=[ 70], 60.00th=[ 72], 00:31:15.507 | 70.00th=[ 73], 80.00th=[ 74], 90.00th=[ 78], 95.00th=[ 81], 00:31:15.507 | 99.00th=[ 110], 99.50th=[ 115], 99.90th=[ 118], 99.95th=[ 118], 00:31:15.507 | 99.99th=[ 118] 00:31:15.507 bw ( KiB/s): min=48993, max=50176, per=5.93%, avg=49584.50, stdev=836.51, samples=2 00:31:15.507 iops : min= 382, max= 392, avg=387.00, stdev= 7.07, samples=2 00:31:15.507 lat (msec) : 2=0.12%, 4=1.30%, 10=19.20%, 20=30.62%, 50=1.53% 00:31:15.507 lat 
(msec) : 100=46.41%, 250=0.82% 00:31:15.507 cpu : usr=0.86%, sys=1.43%, ctx=802, majf=0, minf=1 00:31:15.507 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=96.3%, >=64=0.0% 00:31:15.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.507 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:31:15.507 issued rwts: total=433,416,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.507 latency : target=0, window=0, percentile=100.00%, depth=32 00:31:15.507 job3: (groupid=0, jobs=1): err= 0: pid=78867: Mon Jul 22 17:09:17 2024 00:31:15.507 read: IOPS=411, BW=51.4MiB/s (53.9MB/s)(53.8MiB/1045msec) 00:31:15.507 slat (usec): min=7, max=900, avg=30.19, stdev=70.16 00:31:15.507 clat (usec): min=2260, max=51635, avg=9811.76, stdev=3015.87 00:31:15.507 lat (usec): min=2270, max=51656, avg=9841.95, stdev=3013.66 00:31:15.507 clat percentiles (usec): 00:31:15.507 | 1.00th=[ 4883], 5.00th=[ 8094], 10.00th=[ 8455], 20.00th=[ 8848], 00:31:15.507 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9765], 00:31:15.507 | 70.00th=[10028], 80.00th=[10421], 90.00th=[10814], 95.00th=[11207], 00:31:15.507 | 99.00th=[16450], 99.50th=[18744], 99.90th=[51643], 99.95th=[51643], 00:31:15.507 | 99.99th=[51643] 00:31:15.507 bw ( KiB/s): min=52224, max=57344, per=6.65%, avg=54784.00, stdev=3620.39, samples=2 00:31:15.507 iops : min= 408, max= 448, avg=428.00, stdev=28.28, samples=2 00:31:15.507 write: IOPS=426, BW=53.3MiB/s (55.9MB/s)(55.8MiB/1045msec); 0 zone resets 00:31:15.507 slat (usec): min=8, max=517, avg=36.25, stdev=55.47 00:31:15.507 clat (msec): min=11, max=109, avg=65.31, stdev= 9.89 00:31:15.507 lat (msec): min=11, max=109, avg=65.34, stdev= 9.89 00:31:15.507 clat percentiles (msec): 00:31:15.507 | 1.00th=[ 25], 5.00th=[ 54], 10.00th=[ 58], 20.00th=[ 62], 00:31:15.507 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 66], 60.00th=[ 67], 00:31:15.507 | 70.00th=[ 69], 80.00th=[ 71], 90.00th=[ 73], 95.00th=[ 75], 00:31:15.507 | 
99.00th=[ 100], 99.50th=[ 103], 99.90th=[ 110], 99.95th=[ 110], 00:31:15.507 | 99.99th=[ 110] 00:31:15.507 bw ( KiB/s): min=52736, max=54016, per=6.38%, avg=53376.00, stdev=905.10, samples=2 00:31:15.507 iops : min= 412, max= 422, avg=417.00, stdev= 7.07, samples=2 00:31:15.507 lat (msec) : 4=0.23%, 10=32.42%, 20=16.55%, 50=1.83%, 100=48.52% 00:31:15.507 lat (msec) : 250=0.46% 00:31:15.507 cpu : usr=0.67%, sys=1.82%, ctx=815, majf=0, minf=1 00:31:15.507 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=96.5%, >=64=0.0% 00:31:15.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.507 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:31:15.507 issued rwts: total=430,446,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.507 latency : target=0, window=0, percentile=100.00%, depth=32 00:31:15.507 job4: (groupid=0, jobs=1): err= 0: pid=78872: Mon Jul 22 17:09:17 2024 00:31:15.507 read: IOPS=400, BW=50.0MiB/s (52.5MB/s)(52.6MiB/1052msec) 00:31:15.507 slat (usec): min=8, max=490, avg=33.44, stdev=61.51 00:31:15.507 clat (usec): min=6813, max=57034, avg=10437.27, stdev=3584.15 00:31:15.507 lat (usec): min=6841, max=57284, avg=10470.71, stdev=3587.41 00:31:15.507 clat percentiles (usec): 00:31:15.507 | 1.00th=[ 8225], 5.00th=[ 8717], 10.00th=[ 8848], 20.00th=[ 9241], 00:31:15.507 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:31:15.507 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11469], 95.00th=[14353], 00:31:15.507 | 99.00th=[19006], 99.50th=[19006], 99.90th=[56886], 99.95th=[56886], 00:31:15.507 | 99.99th=[56886] 00:31:15.507 bw ( KiB/s): min=48287, max=58880, per=6.51%, avg=53583.50, stdev=7490.38, samples=2 00:31:15.507 iops : min= 377, max= 460, avg=418.50, stdev=58.69, samples=2 00:31:15.507 write: IOPS=416, BW=52.0MiB/s (54.6MB/s)(54.8MiB/1052msec); 0 zone resets 00:31:15.507 slat (usec): min=9, max=6051, avg=65.36, stdev=322.19 00:31:15.507 clat (msec): min=6, max=117, avg=66.10, 
stdev=13.26 00:31:15.507 lat (msec): min=6, max=117, avg=66.17, stdev=13.17 00:31:15.507 clat percentiles (msec): 00:31:15.507 | 1.00th=[ 9], 5.00th=[ 52], 10.00th=[ 56], 20.00th=[ 62], 00:31:15.507 | 30.00th=[ 65], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:31:15.507 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 75], 95.00th=[ 79], 00:31:15.507 | 99.00th=[ 113], 99.50th=[ 114], 99.90th=[ 118], 99.95th=[ 118], 00:31:15.507 | 99.99th=[ 118] 00:31:15.507 bw ( KiB/s): min=52119, max=52224, per=6.24%, avg=52171.50, stdev=74.25, samples=2 00:31:15.507 iops : min= 407, max= 408, avg=407.50, stdev= 0.71, samples=2 00:31:15.507 lat (msec) : 10=28.29%, 20=21.65%, 50=1.16%, 100=47.96%, 250=0.93% 00:31:15.507 cpu : usr=0.38%, sys=2.09%, ctx=851, majf=0, minf=1 00:31:15.507 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=96.4%, >=64=0.0% 00:31:15.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.507 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:31:15.507 issued rwts: total=421,438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.507 latency : target=0, window=0, percentile=100.00%, depth=32 00:31:15.507 job5: (groupid=0, jobs=1): err= 0: pid=78873: Mon Jul 22 17:09:17 2024 00:31:15.507 read: IOPS=422, BW=52.8MiB/s (55.3MB/s)(55.6MiB/1054msec) 00:31:15.507 slat (usec): min=7, max=635, avg=25.28, stdev=49.67 00:31:15.507 clat (usec): min=617, max=61737, avg=10122.53, stdev=3331.23 00:31:15.507 lat (usec): min=633, max=61749, avg=10147.80, stdev=3329.49 00:31:15.507 clat percentiles (usec): 00:31:15.507 | 1.00th=[ 4424], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9241], 00:31:15.507 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:31:15.507 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10683], 95.00th=[10945], 00:31:15.507 | 99.00th=[25035], 99.50th=[25297], 99.90th=[61604], 99.95th=[61604], 00:31:15.507 | 99.99th=[61604] 00:31:15.507 bw ( KiB/s): min=53908, max=59767, per=6.90%, 
avg=56837.50, stdev=4142.94, samples=2 00:31:15.507 iops : min= 421, max= 466, avg=443.50, stdev=31.82, samples=2 00:31:15.507 write: IOPS=410, BW=51.4MiB/s (53.8MB/s)(54.1MiB/1054msec); 0 zone resets 00:31:15.507 slat (usec): min=8, max=4323, avg=44.49, stdev=218.80 00:31:15.507 clat (msec): min=4, max=114, avg=66.89, stdev=10.79 00:31:15.507 lat (msec): min=4, max=114, avg=66.93, stdev=10.73 00:31:15.507 clat percentiles (msec): 00:31:15.507 | 1.00th=[ 22], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 63], 00:31:15.507 | 30.00th=[ 65], 40.00th=[ 66], 50.00th=[ 67], 60.00th=[ 69], 00:31:15.507 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 75], 95.00th=[ 79], 00:31:15.507 | 99.00th=[ 107], 99.50th=[ 109], 99.90th=[ 114], 99.95th=[ 114], 00:31:15.507 | 99.99th=[ 114] 00:31:15.507 bw ( KiB/s): min=49507, max=53397, per=6.15%, avg=51452.00, stdev=2750.65, samples=2 00:31:15.507 iops : min= 386, max= 417, avg=401.50, stdev=21.92, samples=2 00:31:15.507 lat (usec) : 750=0.11% 00:31:15.507 lat (msec) : 10=31.09%, 20=19.02%, 50=1.94%, 100=46.92%, 250=0.91% 00:31:15.507 cpu : usr=0.85%, sys=1.33%, ctx=814, majf=0, minf=1 00:31:15.507 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=96.5%, >=64=0.0% 00:31:15.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.507 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:31:15.507 issued rwts: total=445,433,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.507 latency : target=0, window=0, percentile=100.00%, depth=32 00:31:15.507 job6: (groupid=0, jobs=1): err= 0: pid=78880: Mon Jul 22 17:09:17 2024 00:31:15.507 read: IOPS=393, BW=49.2MiB/s (51.5MB/s)(51.1MiB/1040msec) 00:31:15.507 slat (usec): min=8, max=460, avg=24.82, stdev=38.13 00:31:15.507 clat (usec): min=2372, max=47592, avg=10497.29, stdev=3243.33 00:31:15.507 lat (usec): min=2382, max=47620, avg=10522.11, stdev=3243.95 00:31:15.508 clat percentiles (usec): 00:31:15.508 | 1.00th=[ 4555], 5.00th=[ 8586], 10.00th=[ 9110], 
20.00th=[ 9372], 00:31:15.508 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[10552], 00:31:15.508 | 70.00th=[10945], 80.00th=[11338], 90.00th=[11731], 95.00th=[12256], 00:31:15.508 | 99.00th=[13304], 99.50th=[41157], 99.90th=[47449], 99.95th=[47449], 00:31:15.508 | 99.99th=[47449] 00:31:15.508 bw ( KiB/s): min=50176, max=53867, per=6.32%, avg=52021.50, stdev=2609.93, samples=2 00:31:15.508 iops : min= 392, max= 420, avg=406.00, stdev=19.80, samples=2 00:31:15.508 write: IOPS=399, BW=49.9MiB/s (52.3MB/s)(51.9MiB/1040msec); 0 zone resets 00:31:15.508 slat (usec): min=8, max=772, avg=35.74, stdev=66.05 00:31:15.508 clat (msec): min=11, max=104, avg=69.65, stdev=11.62 00:31:15.508 lat (msec): min=11, max=104, avg=69.69, stdev=11.62 00:31:15.508 clat percentiles (msec): 00:31:15.508 | 1.00th=[ 25], 5.00th=[ 53], 10.00th=[ 61], 20.00th=[ 64], 00:31:15.508 | 30.00th=[ 67], 40.00th=[ 69], 50.00th=[ 71], 60.00th=[ 72], 00:31:15.508 | 70.00th=[ 74], 80.00th=[ 77], 90.00th=[ 82], 95.00th=[ 88], 00:31:15.508 | 99.00th=[ 97], 99.50th=[ 100], 99.90th=[ 105], 99.95th=[ 105], 00:31:15.508 | 99.99th=[ 105] 00:31:15.508 bw ( KiB/s): min=48896, max=50020, per=5.92%, avg=49458.00, stdev=794.79, samples=2 00:31:15.508 iops : min= 382, max= 390, avg=386.00, stdev= 5.66, samples=2 00:31:15.508 lat (msec) : 4=0.36%, 10=21.48%, 20=27.91%, 50=1.82%, 100=48.18% 00:31:15.508 lat (msec) : 250=0.24% 00:31:15.508 cpu : usr=0.67%, sys=1.64%, ctx=772, majf=0, minf=1 00:31:15.508 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=96.2%, >=64=0.0% 00:31:15.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.508 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:31:15.508 issued rwts: total=409,415,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.508 latency : target=0, window=0, percentile=100.00%, depth=32 00:31:15.508 job7: (groupid=0, jobs=1): err= 0: pid=78952: Mon Jul 22 17:09:17 2024 00:31:15.508 read: IOPS=382, 
BW=47.8MiB/s (50.2MB/s)(50.1MiB/1048msec) 00:31:15.508 slat (usec): min=8, max=1404, avg=30.10, stdev=91.19 00:31:15.508 clat (usec): min=3814, max=52203, avg=9927.60, stdev=3820.09 00:31:15.508 lat (usec): min=3824, max=52220, avg=9957.70, stdev=3815.27 00:31:15.508 clat percentiles (usec): 00:31:15.508 | 1.00th=[ 3851], 5.00th=[ 8094], 10.00th=[ 8455], 20.00th=[ 8717], 00:31:15.508 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9896], 00:31:15.508 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11600], 00:31:15.508 | 99.00th=[17957], 99.50th=[49546], 99.90th=[52167], 99.95th=[52167], 00:31:15.508 | 99.99th=[52167] 00:31:15.508 bw ( KiB/s): min=46848, max=55040, per=6.19%, avg=50944.00, stdev=5792.62, samples=2 00:31:15.508 iops : min= 366, max= 430, avg=398.00, stdev=45.25, samples=2 00:31:15.508 write: IOPS=423, BW=53.0MiB/s (55.5MB/s)(55.5MiB/1048msec); 0 zone resets 00:31:15.508 slat (usec): min=9, max=631, avg=34.57, stdev=57.83 00:31:15.508 clat (msec): min=14, max=123, avg=66.30, stdev=10.52 00:31:15.508 lat (msec): min=14, max=123, avg=66.33, stdev=10.52 00:31:15.508 clat percentiles (msec): 00:31:15.508 | 1.00th=[ 27], 5.00th=[ 56], 10.00th=[ 58], 20.00th=[ 62], 00:31:15.508 | 30.00th=[ 64], 40.00th=[ 65], 50.00th=[ 67], 60.00th=[ 69], 00:31:15.508 | 70.00th=[ 70], 80.00th=[ 71], 90.00th=[ 74], 95.00th=[ 78], 00:31:15.508 | 99.00th=[ 111], 99.50th=[ 116], 99.90th=[ 125], 99.95th=[ 125], 00:31:15.508 | 99.99th=[ 125] 00:31:15.508 bw ( KiB/s): min=52480, max=54016, per=6.37%, avg=53248.00, stdev=1086.12, samples=2 00:31:15.508 iops : min= 410, max= 422, avg=416.00, stdev= 8.49, samples=2 00:31:15.508 lat (msec) : 4=0.59%, 10=30.89%, 20=15.98%, 50=1.42%, 100=50.30% 00:31:15.508 lat (msec) : 250=0.83% 00:31:15.508 cpu : usr=0.86%, sys=1.53%, ctx=783, majf=0, minf=1 00:31:15.508 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=96.3%, >=64=0.0% 00:31:15.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:31:15.508 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:31:15.508 issued rwts: total=401,444,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.508 latency : target=0, window=0, percentile=100.00%, depth=32 00:31:15.508 job8: (groupid=0, jobs=1): err= 0: pid=78957: Mon Jul 22 17:09:17 2024 00:31:15.508 read: IOPS=370, BW=46.3MiB/s (48.5MB/s)(48.2MiB/1043msec) 00:31:15.508 slat (usec): min=8, max=1995, avg=31.84, stdev=124.87 00:31:15.508 clat (usec): min=3193, max=51317, avg=10297.77, stdev=3952.37 00:31:15.508 lat (usec): min=4578, max=51351, avg=10329.61, stdev=3942.26 00:31:15.508 clat percentiles (usec): 00:31:15.508 | 1.00th=[ 6718], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9241], 00:31:15.508 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:31:15.508 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11863], 00:31:15.508 | 99.00th=[43779], 99.50th=[46400], 99.90th=[51119], 99.95th=[51119], 00:31:15.508 | 99.99th=[51119] 00:31:15.508 bw ( KiB/s): min=46336, max=51456, per=5.94%, avg=48896.00, stdev=3620.39, samples=2 00:31:15.508 iops : min= 362, max= 402, avg=382.00, stdev=28.28, samples=2 00:31:15.508 write: IOPS=414, BW=51.8MiB/s (54.3MB/s)(54.0MiB/1043msec); 0 zone resets 00:31:15.508 slat (usec): min=9, max=623, avg=35.11, stdev=56.17 00:31:15.508 clat (msec): min=6, max=111, avg=67.59, stdev=10.13 00:31:15.508 lat (msec): min=6, max=111, avg=67.63, stdev=10.12 00:31:15.508 clat percentiles (msec): 00:31:15.508 | 1.00th=[ 22], 5.00th=[ 54], 10.00th=[ 63], 20.00th=[ 65], 00:31:15.508 | 30.00th=[ 66], 40.00th=[ 68], 50.00th=[ 68], 60.00th=[ 69], 00:31:15.508 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 74], 95.00th=[ 78], 00:31:15.508 | 99.00th=[ 106], 99.50th=[ 108], 99.90th=[ 112], 99.95th=[ 112], 00:31:15.508 | 99.99th=[ 112] 00:31:15.508 bw ( KiB/s): min=51456, max=52224, per=6.20%, avg=51840.00, stdev=543.06, samples=2 00:31:15.508 iops : min= 402, max= 408, avg=405.00, stdev= 4.24, 
samples=2 00:31:15.508 lat (msec) : 4=0.12%, 10=26.28%, 20=20.78%, 50=1.96%, 100=50.12% 00:31:15.508 lat (msec) : 250=0.73% 00:31:15.508 cpu : usr=0.77%, sys=1.15%, ctx=825, majf=0, minf=1 00:31:15.508 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=96.2%, >=64=0.0% 00:31:15.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.508 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:31:15.508 issued rwts: total=386,432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.508 latency : target=0, window=0, percentile=100.00%, depth=32 00:31:15.508 job9: (groupid=0, jobs=1): err= 0: pid=78963: Mon Jul 22 17:09:17 2024 00:31:15.508 read: IOPS=444, BW=55.6MiB/s (58.3MB/s)(58.0MiB/1043msec) 00:31:15.508 slat (usec): min=9, max=1577, avg=34.73, stdev=110.65 00:31:15.508 clat (usec): min=7479, max=50048, avg=10548.45, stdev=3803.44 00:31:15.508 lat (usec): min=7502, max=50068, avg=10583.18, stdev=3799.42 00:31:15.508 clat percentiles (usec): 00:31:15.508 | 1.00th=[ 7963], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:31:15.508 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:31:15.508 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11338], 95.00th=[13304], 00:31:15.508 | 99.00th=[17957], 99.50th=[47449], 99.90th=[50070], 99.95th=[50070], 00:31:15.508 | 99.99th=[50070] 00:31:15.508 bw ( KiB/s): min=53504, max=64256, per=7.15%, avg=58880.00, stdev=7602.81, samples=2 00:31:15.508 iops : min= 418, max= 502, avg=460.00, stdev=59.40, samples=2 00:31:15.508 write: IOPS=412, BW=51.5MiB/s (54.0MB/s)(53.8MiB/1043msec); 0 zone resets 00:31:15.508 slat (usec): min=10, max=2528, avg=39.94, stdev=141.65 00:31:15.508 clat (msec): min=18, max=105, avg=65.95, stdev= 9.70 00:31:15.508 lat (msec): min=18, max=105, avg=65.99, stdev= 9.70 00:31:15.508 clat percentiles (msec): 00:31:15.508 | 1.00th=[ 32], 5.00th=[ 52], 10.00th=[ 57], 20.00th=[ 62], 00:31:15.508 | 30.00th=[ 64], 40.00th=[ 65], 50.00th=[ 67], 
60.00th=[ 69], 00:31:15.508 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 75], 95.00th=[ 78], 00:31:15.508 | 99.00th=[ 97], 99.50th=[ 102], 99.90th=[ 106], 99.95th=[ 106], 00:31:15.508 | 99.99th=[ 106] 00:31:15.508 bw ( KiB/s): min=49408, max=53504, per=6.15%, avg=51456.00, stdev=2896.31, samples=2 00:31:15.508 iops : min= 386, max= 418, avg=402.00, stdev=22.63, samples=2 00:31:15.508 lat (msec) : 10=25.17%, 20=26.51%, 50=2.35%, 100=45.64%, 250=0.34% 00:31:15.508 cpu : usr=1.25%, sys=1.25%, ctx=842, majf=0, minf=1 00:31:15.508 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=96.5%, >=64=0.0% 00:31:15.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.508 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:31:15.508 issued rwts: total=464,430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.508 latency : target=0, window=0, percentile=100.00%, depth=32 00:31:15.508 job10: (groupid=0, jobs=1): err= 0: pid=78964: Mon Jul 22 17:09:17 2024 00:31:15.508 read: IOPS=414, BW=51.8MiB/s (54.3MB/s)(54.2MiB/1047msec) 00:31:15.509 slat (usec): min=8, max=830, avg=25.72, stdev=57.78 00:31:15.509 clat (usec): min=3256, max=54335, avg=10795.26, stdev=4482.38 00:31:15.509 lat (usec): min=3267, max=54364, avg=10820.98, stdev=4481.92 00:31:15.509 clat percentiles (usec): 00:31:15.509 | 1.00th=[ 7177], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:31:15.509 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10552], 00:31:15.509 | 70.00th=[10814], 80.00th=[11338], 90.00th=[11994], 95.00th=[12387], 00:31:15.509 | 99.00th=[47449], 99.50th=[50070], 99.90th=[54264], 99.95th=[54264], 00:31:15.509 | 99.99th=[54264] 00:31:15.509 bw ( KiB/s): min=48896, max=60928, per=6.67%, avg=54912.00, stdev=8507.91, samples=2 00:31:15.509 iops : min= 382, max= 476, avg=429.00, stdev=66.47, samples=2 00:31:15.509 write: IOPS=394, BW=49.3MiB/s (51.7MB/s)(51.6MiB/1047msec); 0 zone resets 00:31:15.509 slat (usec): min=9, max=473, 
avg=26.28, stdev=38.04 00:31:15.509 clat (msec): min=13, max=123, avg=69.55, stdev=11.71 00:31:15.509 lat (msec): min=13, max=123, avg=69.57, stdev=11.72 00:31:15.509 clat percentiles (msec): 00:31:15.509 | 1.00th=[ 25], 5.00th=[ 54], 10.00th=[ 61], 20.00th=[ 64], 00:31:15.509 | 30.00th=[ 67], 40.00th=[ 69], 50.00th=[ 71], 60.00th=[ 72], 00:31:15.509 | 70.00th=[ 74], 80.00th=[ 75], 90.00th=[ 79], 95.00th=[ 82], 00:31:15.509 | 99.00th=[ 114], 99.50th=[ 122], 99.90th=[ 124], 99.95th=[ 124], 00:31:15.509 | 99.99th=[ 124] 00:31:15.509 bw ( KiB/s): min=48896, max=49920, per=5.91%, avg=49408.00, stdev=724.08, samples=2 00:31:15.509 iops : min= 382, max= 390, avg=386.00, stdev= 5.66, samples=2 00:31:15.509 lat (msec) : 4=0.24%, 10=23.26%, 20=27.51%, 50=1.89%, 100=46.40% 00:31:15.509 lat (msec) : 250=0.71% 00:31:15.509 cpu : usr=0.48%, sys=1.53%, ctx=821, majf=0, minf=1 00:31:15.509 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=96.3%, >=64=0.0% 00:31:15.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.509 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:31:15.509 issued rwts: total=434,413,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.509 latency : target=0, window=0, percentile=100.00%, depth=32 00:31:15.509 job11: (groupid=0, jobs=1): err= 0: pid=78965: Mon Jul 22 17:09:17 2024 00:31:15.509 read: IOPS=416, BW=52.1MiB/s (54.7MB/s)(54.6MiB/1048msec) 00:31:15.509 slat (usec): min=8, max=817, avg=25.86, stdev=53.32 00:31:15.509 clat (usec): min=705, max=55462, avg=10444.16, stdev=5649.48 00:31:15.509 lat (usec): min=718, max=55483, avg=10470.02, stdev=5648.10 00:31:15.509 clat percentiles (usec): 00:31:15.509 | 1.00th=[ 1598], 5.00th=[ 8160], 10.00th=[ 8586], 20.00th=[ 8979], 00:31:15.509 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10028], 00:31:15.509 | 70.00th=[10159], 80.00th=[10421], 90.00th=[11076], 95.00th=[12911], 00:31:15.509 | 99.00th=[52691], 99.50th=[52691], 99.90th=[55313], 
99.95th=[55313], 00:31:15.509 | 99.99th=[55313] 00:31:15.509 bw ( KiB/s): min=49564, max=60416, per=6.68%, avg=54990.00, stdev=7673.52, samples=2 00:31:15.509 iops : min= 387, max= 472, avg=429.50, stdev=60.10, samples=2 00:31:15.509 write: IOPS=417, BW=52.2MiB/s (54.8MB/s)(54.8MiB/1048msec); 0 zone resets 00:31:15.509 slat (usec): min=10, max=6405, avg=41.79, stdev=307.28 00:31:15.509 clat (msec): min=13, max=113, avg=65.47, stdev=10.17 00:31:15.509 lat (msec): min=16, max=113, avg=65.51, stdev=10.11 00:31:15.509 clat percentiles (msec): 00:31:15.509 | 1.00th=[ 26], 5.00th=[ 51], 10.00th=[ 59], 20.00th=[ 62], 00:31:15.509 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 66], 60.00th=[ 67], 00:31:15.509 | 70.00th=[ 68], 80.00th=[ 70], 90.00th=[ 73], 95.00th=[ 75], 00:31:15.509 | 99.00th=[ 104], 99.50th=[ 109], 99.90th=[ 114], 99.95th=[ 114], 00:31:15.509 | 99.99th=[ 114] 00:31:15.509 bw ( KiB/s): min=51968, max=53908, per=6.33%, avg=52938.00, stdev=1371.79, samples=2 00:31:15.509 iops : min= 406, max= 421, avg=413.50, stdev=10.61, samples=2 00:31:15.509 lat (usec) : 750=0.11% 00:31:15.509 lat (msec) : 2=0.46%, 4=0.23%, 10=29.49%, 20=19.20%, 50=1.83% 00:31:15.509 lat (msec) : 100=47.77%, 250=0.91% 00:31:15.509 cpu : usr=0.57%, sys=1.81%, ctx=809, majf=0, minf=1 00:31:15.509 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=96.5%, >=64=0.0% 00:31:15.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.509 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:31:15.509 issued rwts: total=437,438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.509 latency : target=0, window=0, percentile=100.00%, depth=32 00:31:15.509 job12: (groupid=0, jobs=1): err= 0: pid=78966: Mon Jul 22 17:09:17 2024 00:31:15.509 read: IOPS=459, BW=57.4MiB/s (60.2MB/s)(59.8MiB/1041msec) 00:31:15.509 slat (usec): min=8, max=309, avg=22.52, stdev=30.12 00:31:15.509 clat (usec): min=3679, max=49359, avg=10298.38, stdev=3947.32 00:31:15.509 lat (usec): 
min=3694, max=49376, avg=10320.90, stdev=3946.42 00:31:15.509 clat percentiles (usec): 00:31:15.509 | 1.00th=[ 3687], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9241], 00:31:15.509 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:31:15.509 | 70.00th=[10290], 80.00th=[10552], 90.00th=[11076], 95.00th=[13435], 00:31:15.509 | 99.00th=[42206], 99.50th=[44827], 99.90th=[49546], 99.95th=[49546], 00:31:15.509 | 99.99th=[49546] 00:31:15.509 bw ( KiB/s): min=59392, max=61696, per=7.35%, avg=60544.00, stdev=1629.17, samples=2 00:31:15.509 iops : min= 464, max= 482, avg=473.00, stdev=12.73, samples=2 00:31:15.509 write: IOPS=414, BW=51.8MiB/s (54.3MB/s)(53.9MiB/1041msec); 0 zone resets 00:31:15.509 slat (usec): min=9, max=594, avg=29.71, stdev=52.73 00:31:15.509 clat (msec): min=12, max=107, avg=65.60, stdev= 9.50 00:31:15.509 lat (msec): min=13, max=107, avg=65.63, stdev= 9.50 00:31:15.509 clat percentiles (msec): 00:31:15.509 | 1.00th=[ 27], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 62], 00:31:15.509 | 30.00th=[ 64], 40.00th=[ 66], 50.00th=[ 67], 60.00th=[ 68], 00:31:15.509 | 70.00th=[ 69], 80.00th=[ 70], 90.00th=[ 72], 95.00th=[ 74], 00:31:15.509 | 99.00th=[ 102], 99.50th=[ 106], 99.90th=[ 108], 99.95th=[ 108], 00:31:15.509 | 99.99th=[ 108] 00:31:15.509 bw ( KiB/s): min=51456, max=52224, per=6.20%, avg=51840.00, stdev=543.06, samples=2 00:31:15.509 iops : min= 402, max= 408, avg=405.00, stdev= 4.24, samples=2 00:31:15.509 lat (msec) : 4=0.55%, 10=31.24%, 20=20.57%, 50=2.09%, 100=44.77% 00:31:15.509 lat (msec) : 250=0.77% 00:31:15.509 cpu : usr=0.48%, sys=1.83%, ctx=871, majf=0, minf=1 00:31:15.509 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=96.6%, >=64=0.0% 00:31:15.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.509 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:31:15.509 issued rwts: total=478,431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.509 latency : target=0, window=0, 
percentile=100.00%, depth=32 00:31:15.509 job13: (groupid=0, jobs=1): err= 0: pid=78967: Mon Jul 22 17:09:17 2024 00:31:15.509 read: IOPS=421, BW=52.7MiB/s (55.2MB/s)(55.0MiB/1044msec) 00:31:15.509 slat (usec): min=8, max=536, avg=23.81, stdev=47.87 00:31:15.509 clat (usec): min=6928, max=51646, avg=10762.16, stdev=4470.55 00:31:15.509 lat (usec): min=6939, max=51681, avg=10785.97, stdev=4469.85 00:31:15.509 clat percentiles (usec): 00:31:15.509 | 1.00th=[ 7963], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9372], 00:31:15.509 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159], 00:31:15.509 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[15533], 00:31:15.509 | 99.00th=[44303], 99.50th=[49021], 99.90th=[51643], 99.95th=[51643], 00:31:15.509 | 99.99th=[51643] 00:31:15.509 bw ( KiB/s): min=49507, max=61952, per=6.77%, avg=55729.50, stdev=8799.94, samples=2 00:31:15.509 iops : min= 386, max= 484, avg=435.00, stdev=69.30, samples=2 00:31:15.509 write: IOPS=409, BW=51.2MiB/s (53.7MB/s)(53.5MiB/1044msec); 0 zone resets 00:31:15.509 slat (usec): min=10, max=420, avg=27.31, stdev=38.76 00:31:15.509 clat (msec): min=20, max=114, avg=66.81, stdev= 9.67 00:31:15.509 lat (msec): min=20, max=114, avg=66.84, stdev= 9.67 00:31:15.509 clat percentiles (msec): 00:31:15.509 | 1.00th=[ 35], 5.00th=[ 55], 10.00th=[ 60], 20.00th=[ 63], 00:31:15.509 | 30.00th=[ 64], 40.00th=[ 66], 50.00th=[ 67], 60.00th=[ 69], 00:31:15.509 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 74], 95.00th=[ 75], 00:31:15.509 | 99.00th=[ 107], 99.50th=[ 110], 99.90th=[ 114], 99.95th=[ 114], 00:31:15.509 | 99.99th=[ 114] 00:31:15.509 bw ( KiB/s): min=49152, max=53611, per=6.15%, avg=51381.50, stdev=3152.99, samples=2 00:31:15.509 iops : min= 384, max= 418, avg=401.00, stdev=24.04, samples=2 00:31:15.509 lat (msec) : 10=25.81%, 20=24.19%, 50=2.19%, 100=46.89%, 250=0.92% 00:31:15.509 cpu : usr=0.58%, sys=1.53%, ctx=802, majf=0, minf=1 00:31:15.509 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 
16=1.8%, 32=96.4%, >=64=0.0% 00:31:15.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.509 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:31:15.509 issued rwts: total=440,428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.509 latency : target=0, window=0, percentile=100.00%, depth=32 00:31:15.509 job14: (groupid=0, jobs=1): err= 0: pid=78968: Mon Jul 22 17:09:17 2024 00:31:15.510 read: IOPS=377, BW=47.2MiB/s (49.5MB/s)(49.5MiB/1049msec) 00:31:15.510 slat (usec): min=8, max=496, avg=27.14, stdev=43.73 00:31:15.510 clat (usec): min=951, max=54860, avg=10634.40, stdev=3923.08 00:31:15.510 lat (usec): min=976, max=54878, avg=10661.54, stdev=3922.07 00:31:15.510 clat percentiles (usec): 00:31:15.510 | 1.00th=[ 3752], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372], 00:31:15.510 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10290], 60.00th=[10552], 00:31:15.510 | 70.00th=[10945], 80.00th=[11338], 90.00th=[11994], 95.00th=[12518], 00:31:15.510 | 99.00th=[18220], 99.50th=[50594], 99.90th=[54789], 99.95th=[54789], 00:31:15.510 | 99.99th=[54789] 00:31:15.510 bw ( KiB/s): min=46592, max=54016, per=6.11%, avg=50304.00, stdev=5249.56, samples=2 00:31:15.510 iops : min= 364, max= 422, avg=393.00, stdev=41.01, samples=2 00:31:15.510 write: IOPS=396, BW=49.6MiB/s (52.0MB/s)(52.0MiB/1049msec); 0 zone resets 00:31:15.510 slat (usec): min=8, max=862, avg=38.14, stdev=83.73 00:31:15.510 clat (msec): min=11, max=116, avg=70.30, stdev=11.49 00:31:15.510 lat (msec): min=11, max=116, avg=70.34, stdev=11.50 00:31:15.510 clat percentiles (msec): 00:31:15.510 | 1.00th=[ 24], 5.00th=[ 58], 10.00th=[ 63], 20.00th=[ 66], 00:31:15.510 | 30.00th=[ 68], 40.00th=[ 69], 50.00th=[ 71], 60.00th=[ 72], 00:31:15.510 | 70.00th=[ 74], 80.00th=[ 78], 90.00th=[ 81], 95.00th=[ 85], 00:31:15.510 | 99.00th=[ 107], 99.50th=[ 112], 99.90th=[ 116], 99.95th=[ 116], 00:31:15.510 | 99.99th=[ 116] 00:31:15.510 bw ( KiB/s): min=48896, max=50176, per=5.92%, 
avg=49536.00, stdev=905.10, samples=2 00:31:15.510 iops : min= 382, max= 392, avg=387.00, stdev= 7.07, samples=2 00:31:15.510 lat (usec) : 1000=0.12% 00:31:15.510 lat (msec) : 2=0.25%, 4=0.37%, 10=18.60%, 20=29.56%, 50=1.60% 00:31:15.510 lat (msec) : 100=48.77%, 250=0.74% 00:31:15.510 cpu : usr=0.38%, sys=1.72%, ctx=775, majf=0, minf=1 00:31:15.510 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=96.2%, >=64=0.0% 00:31:15.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.510 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:31:15.510 issued rwts: total=396,416,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.510 latency : target=0, window=0, percentile=100.00%, depth=32 00:31:15.510 job15: (groupid=0, jobs=1): err= 0: pid=78969: Mon Jul 22 17:09:17 2024 00:31:15.510 read: IOPS=405, BW=50.7MiB/s (53.2MB/s)(52.9MiB/1043msec) 00:31:15.510 slat (usec): min=7, max=685, avg=24.13, stdev=50.24 00:31:15.510 clat (usec): min=4873, max=51442, avg=10327.21, stdev=4656.46 00:31:15.510 lat (usec): min=4883, max=51475, avg=10351.34, stdev=4654.86 00:31:15.510 clat percentiles (usec): 00:31:15.510 | 1.00th=[ 7832], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8979], 00:31:15.510 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10028], 00:31:15.510 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10945], 95.00th=[11994], 00:31:15.510 | 99.00th=[45876], 99.50th=[48497], 99.90th=[51643], 99.95th=[51643], 00:31:15.510 | 99.99th=[51643] 00:31:15.510 bw ( KiB/s): min=52224, max=54528, per=6.48%, avg=53376.00, stdev=1629.17, samples=2 00:31:15.510 iops : min= 408, max= 426, avg=417.00, stdev=12.73, samples=2 00:31:15.510 write: IOPS=423, BW=53.0MiB/s (55.5MB/s)(55.2MiB/1043msec); 0 zone resets 00:31:15.510 slat (usec): min=8, max=241, avg=25.94, stdev=18.60 00:31:15.510 clat (msec): min=13, max=104, avg=65.45, stdev= 9.57 00:31:15.510 lat (msec): min=13, max=104, avg=65.47, stdev= 9.58 00:31:15.510 clat percentiles 
(msec): 00:31:15.510 | 1.00th=[ 27], 5.00th=[ 53], 10.00th=[ 58], 20.00th=[ 62], 00:31:15.510 | 30.00th=[ 64], 40.00th=[ 65], 50.00th=[ 67], 60.00th=[ 68], 00:31:15.510 | 70.00th=[ 69], 80.00th=[ 71], 90.00th=[ 73], 95.00th=[ 77], 00:31:15.510 | 99.00th=[ 96], 99.50th=[ 105], 99.90th=[ 105], 99.95th=[ 105], 00:31:15.510 | 99.99th=[ 105] 00:31:15.510 bw ( KiB/s): min=52736, max=53760, per=6.37%, avg=53248.00, stdev=724.08, samples=2 00:31:15.510 iops : min= 412, max= 420, avg=416.00, stdev= 5.66, samples=2 00:31:15.510 lat (msec) : 10=30.29%, 20=18.27%, 50=2.20%, 100=48.90%, 250=0.35% 00:31:15.510 cpu : usr=0.48%, sys=1.82%, ctx=768, majf=0, minf=1 00:31:15.510 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=96.4%, >=64=0.0% 00:31:15.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.510 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:31:15.510 issued rwts: total=423,442,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.510 latency : target=0, window=0, percentile=100.00%, depth=32 00:31:15.510 00:31:15.510 Run status group 0 (all jobs): 00:31:15.510 READ: bw=804MiB/s (843MB/s), 44.8MiB/s-57.4MiB/s (47.0MB/s-60.2MB/s), io=848MiB (889MB), run=1040-1054msec 00:31:15.510 WRITE: bw=817MiB/s (856MB/s), 49.3MiB/s-53.3MiB/s (51.7MB/s-55.9MB/s), io=861MiB (902MB), run=1040-1054msec 00:31:15.510 00:31:15.510 Disk stats (read/write): 00:31:15.510 sda: ios=412/370, merge=0/0, ticks=3496/24572, in_queue=28068, util=75.53% 00:31:15.510 sdb: ios=393/364, merge=0/0, ticks=3518/24658, in_queue=28177, util=76.48% 00:31:15.510 sdc: ios=440/353, merge=0/0, ticks=4030/23923, in_queue=27954, util=76.68% 00:31:15.510 sde: ios=435/379, merge=0/0, ticks=3662/24197, in_queue=27859, util=77.15% 00:31:15.510 sdd: ios=437/376, merge=0/0, ticks=3917/24184, in_queue=28102, util=79.42% 00:31:15.510 sdf: ios=454/369, merge=0/0, ticks=4065/24265, in_queue=28331, util=79.78% 00:31:15.510 sdg: ios=385/349, merge=0/0, ticks=3734/24109, 
in_queue=27844, util=78.29% 00:31:15.510 sdh: ios=377/380, merge=0/0, ticks=3561/24404, in_queue=27965, util=81.02% 00:31:15.510 sdi: ios=346/368, merge=0/0, ticks=3385/24460, in_queue=27846, util=81.95% 00:31:15.510 sdj: ios=416/364, merge=0/0, ticks=4231/23820, in_queue=28051, util=83.53% 00:31:15.510 sdk: ios=403/350, merge=0/0, ticks=4138/23655, in_queue=27793, util=83.98% 00:31:15.510 sdl: ios=382/381, merge=0/0, ticks=3678/24377, in_queue=28055, util=85.70% 00:31:15.510 sdm: ios=437/369, merge=0/0, ticks=4293/23735, in_queue=28028, util=85.93% 00:31:15.510 sdn: ios=412/365, merge=0/0, ticks=4229/24004, in_queue=28234, util=87.28% 00:31:15.510 sdo: ios=354/351, merge=0/0, ticks=3591/24180, in_queue=27771, util=86.49% 00:31:15.510 sdp: ios=371/380, merge=0/0, ticks=3570/24330, in_queue=27901, util=88.39% 00:31:15.510 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@82 -- # iscsicleanup 00:31:15.510 Cleaning up iSCSI connection 00:31:15.510 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:31:15.510 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:31:16.077 Logging out of session [sid: 22, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] 00:31:16.077 Logging out of session [sid: 23, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:31:16.077 Logging out of session [sid: 24, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:31:16.077 Logging out of session [sid: 25, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:31:16.077 Logging out of session [sid: 26, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:31:16.077 Logging out of session [sid: 27, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:31:16.077 Logging out of session [sid: 28, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:31:16.077 Logging out of session 
[sid: 29, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:31:16.077 Logging out of session [sid: 30, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:31:16.077 Logging out of session [sid: 31, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:31:16.077 Logging out of session [sid: 32, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:31:16.077 Logging out of session [sid: 33, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:31:16.077 Logging out of session [sid: 34, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:31:16.077 Logging out of session [sid: 35, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:31:16.077 Logging out of session [sid: 36, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:31:16.077 Logging out of session [sid: 37, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:31:16.077 Logout of [sid: 22, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:31:16.077 Logout of [sid: 23, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:31:16.077 Logout of [sid: 24, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:31:16.077 Logout of [sid: 25, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:31:16.077 Logout of [sid: 26, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:31:16.077 Logout of [sid: 27, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:31:16.077 Logout of [sid: 28, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:31:16.077 Logout of [sid: 29, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:31:16.077 Logout of [sid: 30, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:31:16.077 Logout of [sid: 31, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 
00:31:16.077 Logout of [sid: 32, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:31:16.077 Logout of [sid: 33, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:31:16.077 Logout of [sid: 34, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:31:16.077 Logout of [sid: 35, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:31:16.077 Logout of [sid: 36, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:31:16.077 Logout of [sid: 37, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:31:16.077 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:31:16.077 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@983 -- # rm -rf 00:31:16.077 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@84 -- # RPCS= 00:31:16.077 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # seq 0 15 00:31:16.077 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:16.077 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target0\n' 00:31:16.077 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc0\n' 00:31:16.077 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:16.077 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target1\n' 00:31:16.077 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc1\n' 00:31:16.077 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # 
for i in $(seq 0 $CONNECTION_NUMBER) 00:31:16.077 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target2\n' 00:31:16.077 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc2\n' 00:31:16.077 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:16.077 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target3\n' 00:31:16.077 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc3\n' 00:31:16.077 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:16.077 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target4\n' 00:31:16.077 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc4\n' 00:31:16.077 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:16.077 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target5\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc5\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target6\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # 
RPCS+='bdev_malloc_delete Malloc6\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target7\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc7\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target8\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc8\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target9\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc9\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target10\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc10\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node 
iqn.2016-06.io.spdk:Target11\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc11\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target12\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc12\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target13\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc13\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target14\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc14\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target15\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc15\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@90 -- # echo -e iscsi_delete_target_node 
'iqn.2016-06.io.spdk:Target0\nbdev_malloc_delete' 'Malloc0\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target1\nbdev_malloc_delete' 'Malloc1\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target2\nbdev_malloc_delete' 'Malloc2\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target3\nbdev_malloc_delete' 'Malloc3\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target4\nbdev_malloc_delete' 'Malloc4\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target5\nbdev_malloc_delete' 'Malloc5\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target6\nbdev_malloc_delete' 'Malloc6\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target7\nbdev_malloc_delete' 'Malloc7\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target8\nbdev_malloc_delete' 'Malloc8\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target9\nbdev_malloc_delete' 'Malloc9\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target10\nbdev_malloc_delete' 'Malloc10\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target11\nbdev_malloc_delete' 'Malloc11\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target12\nbdev_malloc_delete' 'Malloc12\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target13\nbdev_malloc_delete' 'Malloc13\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target14\nbdev_malloc_delete' 'Malloc14\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target15\nbdev_malloc_delete' 'Malloc15\n' 00:31:16.078 17:09:17 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:19.369 17:09:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@92 -- # trap 'delete_tmp_files; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:31:19.369 17:09:20 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@94 -- # killprocess 78431 00:31:19.369 17:09:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@948 -- # '[' -z 78431 ']' 00:31:19.369 17:09:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@952 -- # kill -0 78431 
00:31:19.369 17:09:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # uname 00:31:19.369 17:09:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:19.369 17:09:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78431 00:31:19.369 killing process with pid 78431 00:31:19.369 17:09:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:19.369 17:09:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:19.369 17:09:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78431' 00:31:19.369 17:09:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@967 -- # kill 78431 00:31:19.369 17:09:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@972 -- # wait 78431 00:31:21.921 17:09:23 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@95 -- # killprocess 78467 00:31:21.921 17:09:23 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@948 -- # '[' -z 78467 ']' 00:31:21.921 17:09:23 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@952 -- # kill -0 78467 00:31:21.921 17:09:23 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # uname 00:31:21.921 17:09:23 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:21.921 17:09:23 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78467 00:31:21.921 killing process with pid 78467 00:31:21.921 17:09:23 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # process_name=spdk_trace_reco 00:31:21.921 17:09:23 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@958 -- # '[' spdk_trace_reco = sudo ']' 00:31:21.921 17:09:23 iscsi_tgt.iscsi_tgt_trace_record -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 78467' 00:31:21.921 17:09:23 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@967 -- # kill 78467 00:31:21.921 17:09:23 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@972 -- # wait 78467 00:31:21.921 17:09:23 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@96 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -f ./tmp-trace/record.trace 00:31:36.805 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # grep 'trace entries for lcore' ./tmp-trace/record.notice 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # cut -d ' ' -f 2 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # record_num='119520 00:31:36.806 122095 00:31:36.806 122172 00:31:36.806 123252' 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # grep 'Trace Size of lcore' ./tmp-trace/trace.log 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # cut -d ' ' -f 6 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # trace_tool_num='119520 00:31:36.806 122095 00:31:36.806 122172 00:31:36.806 123252' 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@105 -- # delete_tmp_files 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@19 -- # rm -rf ./tmp-trace 00:31:36.806 entries numbers from trace record are: 119520 122095 122172 123252 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@107 -- # echo 'entries numbers from trace record are:' 119520 122095 122172 123252 00:31:36.806 entries numbers from trace tool are: 119520 122095 122172 123252 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@108 -- # echo 'entries 
numbers from trace tool are:' 119520 122095 122172 123252 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@110 -- # arr_record_num=($record_num) 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@111 -- # arr_trace_tool_num=($trace_tool_num) 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@112 -- # len_arr_record_num=4 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@113 -- # len_arr_trace_tool_num=4 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@116 -- # '[' 4 -ne 4 ']' 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # seq 0 3 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 119520 -le 4096 ']' 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 119520 -ne 119520 ']' 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 122095 -le 4096 ']' 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 122095 -ne 122095 ']' 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 122172 -le 4096 ']' 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 122172 -ne 122172 ']' 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 123252 -le 4096 ']' 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 123252 -ne 123252 ']' 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@135 -- # trap - SIGINT SIGTERM EXIT 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@136 -- # iscsitestfini 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:31:36.806 00:31:36.806 real 0m28.272s 00:31:36.806 user 1m11.136s 00:31:36.806 sys 0m4.412s 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:31:36.806 ************************************ 00:31:36.806 END TEST iscsi_tgt_trace_record 00:31:36.806 ************************************ 00:31:36.806 17:09:38 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:31:36.806 17:09:38 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@41 -- # run_test iscsi_tgt_login_redirection /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection/login_redirection.sh 00:31:36.806 17:09:38 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:36.806 17:09:38 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:36.806 17:09:38 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:31:36.806 ************************************ 00:31:36.806 START TEST iscsi_tgt_login_redirection 00:31:36.806 ************************************ 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection/login_redirection.sh 00:31:36.806 * Looking for test 
storage... 00:31:36.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- 
iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@12 -- # iscsitestinit 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@14 -- # NULL_BDEV_SIZE=64 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@17 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@18 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@20 -- # rpc_addr1=/var/tmp/spdk0.sock 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@21 -- # rpc_addr2=/var/tmp/spdk1.sock 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@25 -- # timing_enter start_iscsi_tgts 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:31:36.806 17:09:38 
iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@27 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk0.sock -i 0 -m 0x1 --wait-for-rpc 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@28 -- # pid1=79377 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@29 -- # echo 'Process pid: 79377' 00:31:36.806 Process pid: 79377 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@32 -- # pid2=79378 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@31 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk1.sock -i 1 -m 0x2 --wait-for-rpc 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@33 -- # echo 'Process pid: 79378' 00:31:36.806 Process pid: 79378 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@35 -- # trap 'killprocess $pid1; killprocess $pid2; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@37 -- # waitforlisten 79377 /var/tmp/spdk0.sock 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@829 -- # '[' -z 79377 ']' 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk0.sock 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:36.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock... 
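The `waitforlisten` calls above block until each `iscsi_tgt` instance is up and accepting RPCs on its UNIX-domain socket. A minimal standalone sketch of that polling pattern (the real helper in `autotest_common.sh` also probes the socket with an `rpc_get_methods` call and honors `max_retries=100`; the function name and the simplified path-existence check here are illustrative assumptions, not SPDK's actual implementation):

```shell
#!/usr/bin/env bash
# Illustrative stand-in for waitforlisten: poll until a socket path
# appears, giving up after max_retries iterations of 0.1s each.
wait_for_socket() {
    local sock_path=$1 max_retries=${2:-100}
    local i=0
    # Accept either a real socket (-S) or any path (-e) so the sketch
    # can be exercised without a running RPC server.
    while [ ! -S "$sock_path" ] && [ ! -e "$sock_path" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$max_retries" ]; then
            echo "timed out waiting for $sock_path" >&2
            return 1
        fi
        sleep 0.1
    done
    return 0
}

tmpdir=$(mktemp -d)
# Simulate a target process creating its RPC socket shortly after launch.
( sleep 0.2; touch "$tmpdir/spdk0.sock" ) &
wait_for_socket "$tmpdir/spdk0.sock" 50
echo "socket ready: $tmpdir/spdk0.sock"
```

The two targets in this test get distinct sockets (`/var/tmp/spdk0.sock`, `/var/tmp/spdk1.sock`), shared-memory IDs (`-i 0`, `-i 1`), and core masks (`-m 0x1`, `-m 0x2`) precisely so both can be polled and driven independently.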
00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock...' 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:36.806 17:09:38 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:31:37.069 [2024-07-22 17:09:38.519106] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:37.069 [2024-07-22 17:09:38.519335] [ DPDK EAL parameters: iscsi -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:37.069 [2024-07-22 17:09:38.522677] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:37.069 [2024-07-22 17:09:38.522918] [ DPDK EAL parameters: iscsi -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:37.327 [2024-07-22 17:09:38.693712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.327 [2024-07-22 17:09:38.704884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.585 [2024-07-22 17:09:39.011039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.585 [2024-07-22 17:09:39.026775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.842 17:09:39 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:37.842 17:09:39 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@862 -- # return 0 00:31:37.842 17:09:39 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_set_options -w 0 -o 30 -a 16 00:31:38.408 17:09:39 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock framework_start_init 00:31:39.344 iscsi_tgt_1 is listening. 00:31:39.344 17:09:40 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@40 -- # echo 'iscsi_tgt_1 is listening.' 00:31:39.344 17:09:40 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@42 -- # waitforlisten 79378 /var/tmp/spdk1.sock 00:31:39.344 17:09:40 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@829 -- # '[' -z 79378 ']' 00:31:39.344 17:09:40 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk1.sock 00:31:39.344 17:09:40 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:39.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock... 00:31:39.344 17:09:40 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock...' 
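The "Waiting for process to start up and listen on UNIX domain socket..." messages above come from the `waitforlisten` helper, which blocks until the freshly spawned target exposes its RPC socket. A simplified sketch of that polling loop (an assumption based on the behavior logged here; the real helper in `autotest_common.sh` also probes the socket over RPC and uses `max_retries=100`):

```shell
# Simplified sketch of waitforlisten: poll for the RPC Unix domain socket
# until it appears or the retry budget runs out. The real helper also
# verifies the daemon answers RPCs; this version only checks the socket file.
waitforlisten_sketch() {
  local sock=$1 max_retries=${2:-100} i
  for ((i = 0; i < max_retries; i++)); do
    [[ -S $sock ]] && return 0   # socket exists: target is listening
    sleep 0.1
  done
  return 1                       # gave up waiting
}
```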
00:31:39.344 17:09:40 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:39.344 17:09:40 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:31:39.601 17:09:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:39.601 17:09:41 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@862 -- # return 0 00:31:39.601 17:09:41 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_set_options -w 0 -o 30 -a 16 00:31:40.166 17:09:41 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock framework_start_init 00:31:41.100 iscsi_tgt_2 is listening. 00:31:41.100 17:09:42 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@45 -- # echo 'iscsi_tgt_2 is listening.' 
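Both targets are now listening. The launch pattern recorded above — one `iscsi_tgt` instance per RPC socket, shared-memory id, and core mask, held at `--wait-for-rpc` until `framework_start_init` is issued over that socket — can be sketched as below. `ISCSI_TGT_BIN` is a stand-in for the in-tree `build/bin/iscsi_tgt` path seen in the log, with a stub default so the sketch can be dry-run:

```shell
# Launch one target instance per RPC socket (-r), shared-memory id (-i),
# and core mask (-m); --wait-for-rpc pauses startup until the test issues
# framework_start_init over the socket.
start_tgt() {
  local sock=$1 shm_id=$2 mask=$3
  "${ISCSI_TGT_BIN:-iscsi_tgt}" -r "$sock" -i "$shm_id" -m "$mask" --wait-for-rpc &
  echo $!   # caller records the pid for the cleanup trap
}

# Usage mirroring the log: two instances on disjoint cores.
# pid1=$(start_tgt /var/tmp/spdk0.sock 0 0x1)
# pid2=$(start_tgt /var/tmp/spdk1.sock 1 0x2)
```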
00:31:41.100 17:09:42 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@47 -- # timing_exit start_iscsi_tgts 00:31:41.100 17:09:42 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:41.100 17:09:42 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:31:41.100 17:09:42 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:31:41.358 17:09:42 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_portal_group 1 10.0.0.1:3260 00:31:41.923 17:09:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock bdev_null_create Null0 64 512 00:31:41.923 Null0 00:31:41.923 17:09:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_target_node Target1 Target1_alias Null0:0 1:2 64 -d 00:31:42.489 17:09:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:31:42.489 17:09:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_portal_group 1 10.0.0.3:3260 -p 00:31:42.748 17:09:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock bdev_null_create Null0 64 512 00:31:43.007 Null0 00:31:43.007 17:09:44 iscsi_tgt.iscsi_tgt_login_redirection 
-- login_redirection/login_redirection.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_target_node Target1 Target1_alias Null0:0 1:2 64 -d 00:31:43.265 17:09:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@67 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:31:43.265 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:31:43.265 17:09:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@68 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:31:43.265 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:31:43.265 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:31:43.265 17:09:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@69 -- # waitforiscsidevices 1 00:31:43.265 17:09:44 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@116 -- # local num=1 00:31:43.265 17:09:44 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:31:43.265 17:09:44 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:31:43.265 17:09:44 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:31:43.265 17:09:44 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:31:43.265 [2024-07-22 17:09:44.803259] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:43.265 17:09:44 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # n=1 00:31:43.265 17:09:44 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:31:43.265 17:09:44 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@123 -- # return 0 00:31:43.265 17:09:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@72 -- # 
fiopid=79494 00:31:43.265 17:09:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t randrw -r 15 00:31:43.265 FIO pid: 79494 00:31:43.265 17:09:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@73 -- # echo 'FIO pid: 79494' 00:31:43.265 17:09:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@75 -- # trap 'iscsicleanup; killprocess $pid1; killprocess $pid2; killprocess $fiopid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:31:43.265 17:09:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:31:43.265 17:09:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # jq length 00:31:43.265 [global] 00:31:43.265 thread=1 00:31:43.265 invalidate=1 00:31:43.265 rw=randrw 00:31:43.265 time_based=1 00:31:43.265 runtime=15 00:31:43.265 ioengine=libaio 00:31:43.265 direct=1 00:31:43.265 bs=512 00:31:43.265 iodepth=1 00:31:43.265 norandommap=1 00:31:43.265 numjobs=1 00:31:43.265 00:31:43.265 [job0] 00:31:43.265 filename=/dev/sda 00:31:43.265 queue_depth set to 113 (sda) 00:31:43.524 job0: (g=0): rw=randrw, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:31:43.524 fio-3.35 00:31:43.524 Starting 1 thread 00:31:43.524 [2024-07-22 17:09:44.988005] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:43.524 17:09:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # '[' 1 = 1 ']' 00:31:43.524 17:09:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:31:43.524 17:09:45 iscsi_tgt.iscsi_tgt_login_redirection -- 
login_redirection/login_redirection.sh@78 -- # jq length 00:31:44.091 17:09:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # '[' 0 = 0 ']' 00:31:44.091 17:09:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect iqn.2016-06.io.spdk:Target1 1 -a 10.0.0.3 -p 3260 00:31:44.349 17:09:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_request_logout iqn.2016-06.io.spdk:Target1 -t 1 00:31:44.607 17:09:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@85 -- # sleep 5 00:31:49.875 17:09:50 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:31:49.875 17:09:50 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # jq length 00:31:49.875 17:09:51 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # '[' 0 = 0 ']' 00:31:49.875 17:09:51 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # jq length 00:31:49.875 17:09:51 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:31:50.132 17:09:51 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # '[' 1 = 1 ']' 00:31:50.132 17:09:51 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect iqn.2016-06.io.spdk:Target1 1 00:31:50.390 17:09:51 iscsi_tgt.iscsi_tgt_login_redirection -- 
login_redirection/login_redirection.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_target_node_request_logout iqn.2016-06.io.spdk:Target1 -t 1 00:31:50.648 17:09:52 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@93 -- # sleep 5 00:31:55.906 17:09:57 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:31:55.906 17:09:57 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # jq length 00:31:55.906 17:09:57 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # '[' 1 = 1 ']' 00:31:55.906 17:09:57 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:31:55.906 17:09:57 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # jq length 00:31:56.164 17:09:57 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # '[' 0 = 0 ']' 00:31:56.164 17:09:57 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@98 -- # wait 79494 00:31:58.693 [2024-07-22 17:10:00.095727] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:31:58.693 00:31:58.693 job0: (groupid=0, jobs=1): err= 0: pid=79527: Mon Jul 22 17:10:00 2024 00:31:58.693 read: IOPS=3567, BW=1784KiB/s (1827kB/s)(26.1MiB/15001msec) 00:31:58.693 slat (usec): min=4, max=261, avg= 7.47, stdev= 4.14 00:31:58.693 clat (usec): min=3, max=2008.1k, avg=130.33, stdev=8679.90 00:31:58.693 lat (usec): min=84, max=2008.2k, avg=137.79, stdev=8679.98 00:31:58.693 clat percentiles (usec): 00:31:58.693 | 1.00th=[ 81], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 87], 00:31:58.693 | 30.00th=[ 88], 40.00th=[ 89], 50.00th=[ 91], 60.00th=[ 92], 
00:31:58.693 | 70.00th=[ 94], 80.00th=[ 97], 90.00th=[ 104], 95.00th=[ 112], 00:31:58.693 | 99.00th=[ 126], 99.50th=[ 133], 99.90th=[ 198], 99.95th=[ 265], 00:31:58.693 | 99.99th=[ 955] 00:31:58.693 bw ( KiB/s): min= 653, max= 2626, per=100.00%, avg=2326.50, stdev=421.20, samples=22 00:31:58.693 iops : min= 1306, max= 5252, avg=4653.00, stdev=842.39, samples=22 00:31:58.693 write: IOPS=3554, BW=1777KiB/s (1820kB/s)(26.0MiB/15001msec); 0 zone resets 00:31:58.693 slat (usec): min=4, max=278, avg= 7.37, stdev= 4.46 00:31:58.693 clat (usec): min=3, max=2007.0k, avg=133.27, stdev=8690.93 00:31:58.693 lat (usec): min=86, max=2007.0k, avg=140.64, stdev=8691.04 00:31:58.693 clat percentiles (usec): 00:31:58.693 | 1.00th=[ 83], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 89], 00:31:58.693 | 30.00th=[ 91], 40.00th=[ 92], 50.00th=[ 93], 60.00th=[ 95], 00:31:58.693 | 70.00th=[ 97], 80.00th=[ 101], 90.00th=[ 108], 95.00th=[ 115], 00:31:58.693 | 99.00th=[ 129], 99.50th=[ 137], 99.90th=[ 198], 99.95th=[ 277], 00:31:58.693 | 99.99th=[ 2024] 00:31:58.693 bw ( KiB/s): min= 614, max= 2611, per=100.00%, avg=2317.73, stdev=422.05, samples=22 00:31:58.693 iops : min= 1228, max= 5222, avg=4635.45, stdev=844.11, samples=22 00:31:58.693 lat (usec) : 4=0.02%, 10=0.01%, 50=0.02%, 100=81.80%, 250=18.10% 00:31:58.693 lat (usec) : 500=0.05%, 750=0.01%, 1000=0.01% 00:31:58.693 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:31:58.693 cpu : usr=2.42%, sys=5.99%, ctx=107591, majf=0, minf=1 00:31:58.693 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:58.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.693 issued rwts: total=53520,53323,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:58.693 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:58.693 00:31:58.693 Run status group 0 (all jobs): 00:31:58.693 READ: bw=1784KiB/s (1827kB/s), 
1784KiB/s-1784KiB/s (1827kB/s-1827kB/s), io=26.1MiB (27.4MB), run=15001-15001msec 00:31:58.693 WRITE: bw=1777KiB/s (1820kB/s), 1777KiB/s-1777KiB/s (1820kB/s-1820kB/s), io=26.0MiB (27.3MB), run=15001-15001msec 00:31:58.693 00:31:58.693 Disk stats (read/write): 00:31:58.693 sda: ios=53049/52813, merge=0/0, ticks=6909/7031, in_queue=13941, util=99.43% 00:31:58.693 Cleaning up iSCSI connection 00:31:58.693 17:10:00 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@100 -- # trap - SIGINT SIGTERM EXIT 00:31:58.693 17:10:00 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@102 -- # iscsicleanup 00:31:58.693 17:10:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:31:58.693 17:10:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:31:58.693 Logging out of session [sid: 38, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:31:58.693 Logout of [sid: 38, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
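The redirection exercised in this run boils down to two RPCs against the first target, as shown in the `sh@82`/`sh@83` and `sh@90`/`sh@91` lines above: repoint its portal group at the other portal, then request a logout so the initiator reconnects and follows the redirect. A hedged sketch of that pair (`RPC_PY` is an assumption standing in for `scripts/rpc.py`; swap in `echo` to dry-run):

```shell
# Redirect portal group 1 of a target node to a new portal, then ask the
# connected initiator to log out so it reconnects through the redirect.
redirect_and_logout() {
  local sock=$1 target=$2 addr=$3 port=$4
  "${RPC_PY:-rpc.py}" -s "$sock" \
      iscsi_target_node_set_redirect "$target" 1 -a "$addr" -p "$port"
  "${RPC_PY:-rpc.py}" -s "$sock" \
      iscsi_target_node_request_logout "$target" -t 1
}

# As in the log: move traffic from spdk0's portal over to 10.0.0.3:3260.
# redirect_and_logout /var/tmp/spdk0.sock iqn.2016-06.io.spdk:Target1 10.0.0.3 3260
```

The test then sleeps and compares `iscsi_get_connections | jq length` on both sockets to confirm the session migrated.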
00:31:58.693 17:10:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:31:58.693 17:10:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@983 -- # rm -rf 00:31:58.693 17:10:00 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@103 -- # killprocess 79377 00:31:58.693 17:10:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@948 -- # '[' -z 79377 ']' 00:31:58.693 17:10:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@952 -- # kill -0 79377 00:31:58.693 17:10:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # uname 00:31:58.693 17:10:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:58.693 17:10:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79377 00:31:58.693 killing process with pid 79377 00:31:58.693 17:10:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:58.693 17:10:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:58.693 17:10:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79377' 00:31:58.693 17:10:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@967 -- # kill 79377 00:31:58.693 17:10:00 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@972 -- # wait 79377 00:32:01.224 17:10:02 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@104 -- # killprocess 79378 00:32:01.224 17:10:02 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@948 -- # '[' -z 79378 ']' 00:32:01.224 17:10:02 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@952 -- # kill -0 79378 00:32:01.224 17:10:02 iscsi_tgt.iscsi_tgt_login_redirection -- 
common/autotest_common.sh@953 -- # uname 00:32:01.224 17:10:02 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:01.224 17:10:02 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79378 00:32:01.224 killing process with pid 79378 00:32:01.224 17:10:02 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:01.224 17:10:02 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:01.224 17:10:02 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79378' 00:32:01.224 17:10:02 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@967 -- # kill 79378 00:32:01.224 17:10:02 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@972 -- # wait 79378 00:32:03.754 17:10:04 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@105 -- # iscsitestfini 00:32:03.754 17:10:04 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:32:03.754 00:32:03.754 real 0m26.691s 00:32:03.754 user 0m51.443s 00:32:03.754 sys 0m5.991s 00:32:03.754 17:10:04 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:03.754 17:10:04 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:32:03.754 ************************************ 00:32:03.754 END TEST iscsi_tgt_login_redirection 00:32:03.754 ************************************ 00:32:03.754 17:10:04 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:32:03.754 17:10:04 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@42 -- # run_test iscsi_tgt_digests /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests/digests.sh 00:32:03.754 17:10:04 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:03.754 17:10:04 iscsi_tgt -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:32:03.754 17:10:04 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:32:03.754 ************************************ 00:32:03.754 START TEST iscsi_tgt_digests 00:32:03.754 ************************************ 00:32:03.754 17:10:04 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests/digests.sh 00:32:03.754 * Looking for test storage... 00:32:03.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 
00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@11 -- # iscsitestinit 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@49 -- # MALLOC_BDEV_SIZE=64 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@52 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@54 -- # timing_enter start_iscsi_tgt 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@57 -- # pid=79822 00:32:03.754 Process pid: 79822 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@58 -- # echo 'Process pid: 79822' 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@60 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- 
digests/digests.sh@56 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@62 -- # waitforlisten 79822 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@829 -- # '[' -z 79822 ']' 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:03.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:03.754 17:10:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:32:03.754 [2024-07-22 17:10:05.245549] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:32:03.755 [2024-07-22 17:10:05.245773] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79822 ] 00:32:04.013 [2024-07-22 17:10:05.425601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:04.272 [2024-07-22 17:10:05.737320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.272 [2024-07-22 17:10:05.737333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:04.273 [2024-07-22 17:10:05.737450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:04.273 [2024-07-22 17:10:05.737481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:04.531 17:10:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:04.531 17:10:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@862 -- # return 0 00:32:04.531 17:10:06 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@63 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:32:04.531 17:10:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.531 17:10:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:32:04.531 17:10:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.531 17:10:06 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@64 -- # rpc_cmd framework_start_init 00:32:04.531 17:10:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.531 17:10:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:32:05.464 17:10:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.464 iscsi_tgt is listening. Running tests... 
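With the target listening, the digest test provisions it over RPC in the order the log shows next: portal group, initiator group, backing bdev, then the target node tying them together. A sketch of that sequence with the values recorded in this run (`RPC` is an assumption standing in for `rpc_cmd`; set it to `echo` to dry-run):

```shell
# Provision an iSCSI target the way the digest test does: portal group 1 on
# the target IP, initiator group 2 admitting ANY initiator name from the
# initiator netmask, a 64 MiB / 512 B-block malloc bdev, and a target node
# mapping pg 1 to ig 2 with queue depth 64 and CHAP disabled (-d).
provision_target() {
  local rpc=${RPC:-rpc.py}
  "$rpc" iscsi_create_portal_group 1 10.0.0.1:3260
  "$rpc" iscsi_create_initiator_group 2 ANY 10.0.0.2/32
  "$rpc" bdev_malloc_create 64 512
  "$rpc" iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d
}
```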
00:32:05.464 17:10:06 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@65 -- # echo 'iscsi_tgt is listening. Running tests...' 00:32:05.464 17:10:06 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@67 -- # timing_exit start_iscsi_tgt 00:32:05.464 17:10:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:05.464 17:10:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:32:05.464 17:10:07 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@69 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:32:05.464 17:10:07 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.464 17:10:07 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:32:05.464 17:10:07 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.464 17:10:07 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@70 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:32:05.464 17:10:07 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.464 17:10:07 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:32:05.464 17:10:07 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.464 17:10:07 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@71 -- # rpc_cmd bdev_malloc_create 64 512 00:32:05.464 17:10:07 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.464 17:10:07 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:32:05.722 Malloc0 00:32:05.722 17:10:07 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.722 17:10:07 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@76 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d 00:32:05.722 17:10:07 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.722 
17:10:07 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:32:05.722 17:10:07 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.722 17:10:07 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@77 -- # sleep 1 00:32:06.655 17:10:08 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@79 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:32:06.655 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.DataDigest' -v None 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # true 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # DataDigestAbility='iscsiadm: Cannot modify node.conn[0].iscsi.DataDigest. Invalid param name. 00:32:06.656 iscsiadm: Could not execute operation on all records: invalid parameter' 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@84 -- # '[' 'iscsiadm: Cannot modify node.conn[0].iscsi.DataDigest. Invalid param name. 
00:32:06.656 iscsiadm: Could not execute operation on all records: invalid parameterx' '!=' x ']' 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@85 -- # run_test iscsi_tgt_digest iscsi_header_digest_test 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:32:06.656 ************************************ 00:32:06.656 START TEST iscsi_tgt_digest 00:32:06.656 ************************************ 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@1123 -- # iscsi_header_digest_test 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@27 -- # node_login_fio_logout 'HeaderDigest -v CRC32C' 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@14 -- # for arg in "$@" 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@15 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.HeaderDigest' -v CRC32C 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@17 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:32:06.656 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:32:06.656 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@18 -- # waitforiscsidevices 1 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=1 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:32:06.656 [2024-07-22 17:10:08.197946] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=1 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:32:06.656 17:10:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2 00:32:06.656 [global] 00:32:06.656 thread=1 00:32:06.656 invalidate=1 00:32:06.656 rw=write 00:32:06.656 time_based=1 00:32:06.656 runtime=2 00:32:06.656 ioengine=libaio 00:32:06.656 direct=1 00:32:06.656 bs=512 00:32:06.656 iodepth=1 00:32:06.656 norandommap=1 00:32:06.656 numjobs=1 00:32:06.656 00:32:06.656 [job0] 00:32:06.656 filename=/dev/sda 00:32:06.656 queue_depth set to 113 (sda) 00:32:06.914 job0: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:32:06.914 fio-3.35 00:32:06.914 Starting 1 thread 00:32:06.914 [2024-07-22 17:10:08.377808] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:32:09.440 [2024-07-22 17:10:10.496825] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:32:09.440 00:32:09.440 job0: (groupid=0, jobs=1): err= 0: pid=79927: Mon Jul 22 17:10:10 2024 00:32:09.440 write: IOPS=7303, BW=3652KiB/s (3739kB/s)(7307KiB/2001msec); 0 zone resets 00:32:09.440 slat (nsec): min=5106, max=64547, avg=8253.31, stdev=2561.76 00:32:09.440 clat (usec): min=109, max=2204, avg=127.68, stdev=26.58 00:32:09.440 lat (usec): min=115, max=2220, avg=135.94, stdev=27.11 00:32:09.440 clat percentiles (usec): 00:32:09.440 | 1.00th=[ 113], 5.00th=[ 115], 10.00th=[ 117], 20.00th=[ 119], 00:32:09.440 | 30.00th=[ 121], 40.00th=[ 123], 50.00th=[ 125], 60.00th=[ 127], 00:32:09.440 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 149], 00:32:09.440 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 318], 99.95th=[ 545], 00:32:09.440 | 99.99th=[ 1205] 00:32:09.441 bw ( KiB/s): min= 3641, max= 3794, per=100.00%, avg=3705.67, stdev=79.20, samples=3 00:32:09.441 iops : min= 7282, max= 7588, avg=7411.33, stdev=158.40, samples=3 00:32:09.441 lat (usec) : 250=99.82%, 500=0.13%, 750=0.03% 00:32:09.441 lat (msec) : 2=0.01%, 4=0.01% 00:32:09.441 cpu : usr=1.85%, sys=8.10%, ctx=14615, majf=0, minf=1 00:32:09.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:09.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.441 issued rwts: total=0,14614,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:09.441 00:32:09.441 Run status group 0 (all jobs): 00:32:09.441 WRITE: bw=3652KiB/s (3739kB/s), 3652KiB/s-3652KiB/s (3739kB/s-3739kB/s), io=7307KiB (7482kB), run=2001-2001msec 00:32:09.441 00:32:09.441 Disk stats (read/write): 00:32:09.441 sda: ios=41/13762, merge=0/0, ticks=10/1752, in_queue=1762, util=95.35% 
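[Editor's note] The `[global]`/`[job0]` lines interleaved with timestamps above are the job file that `scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2` feeds to fio. Collected into one fragment for readability (reconstructed from the trace, not taken from the wrapper source):

```ini
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=2
ioengine=libaio
direct=1
bs=512
iodepth=1
norandommap=1
numjobs=1

[job0]
filename=/dev/sda
```

The read pass that follows uses the identical file with `rw=read`.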
00:32:09.441 17:10:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 2 00:32:09.441 [global] 00:32:09.441 thread=1 00:32:09.441 invalidate=1 00:32:09.441 rw=read 00:32:09.441 time_based=1 00:32:09.441 runtime=2 00:32:09.441 ioengine=libaio 00:32:09.441 direct=1 00:32:09.441 bs=512 00:32:09.441 iodepth=1 00:32:09.441 norandommap=1 00:32:09.441 numjobs=1 00:32:09.441 00:32:09.441 [job0] 00:32:09.441 filename=/dev/sda 00:32:09.441 queue_depth set to 113 (sda) 00:32:09.441 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:32:09.441 fio-3.35 00:32:09.441 Starting 1 thread 00:32:11.344 00:32:11.344 job0: (groupid=0, jobs=1): err= 0: pid=79981: Mon Jul 22 17:10:12 2024 00:32:11.344 read: IOPS=8126, BW=4063KiB/s (4161kB/s)(8131KiB/2001msec) 00:32:11.344 slat (nsec): min=4720, max=84748, avg=8337.81, stdev=3387.85 00:32:11.344 clat (usec): min=93, max=2732, avg=113.67, stdev=30.06 00:32:11.344 lat (usec): min=100, max=2740, avg=122.01, stdev=30.93 00:32:11.344 clat percentiles (usec): 00:32:11.344 | 1.00th=[ 100], 5.00th=[ 102], 10.00th=[ 103], 20.00th=[ 105], 00:32:11.344 | 30.00th=[ 108], 40.00th=[ 109], 50.00th=[ 111], 60.00th=[ 113], 00:32:11.344 | 70.00th=[ 116], 80.00th=[ 121], 90.00th=[ 128], 95.00th=[ 135], 00:32:11.344 | 99.00th=[ 149], 99.50th=[ 157], 99.90th=[ 212], 99.95th=[ 449], 00:32:11.344 | 99.99th=[ 1844] 00:32:11.344 bw ( KiB/s): min= 3656, max= 4225, per=98.68%, avg=4010.00, stdev=308.92, samples=3 00:32:11.344 iops : min= 7312, max= 8450, avg=8020.00, stdev=617.84, samples=3 00:32:11.344 lat (usec) : 100=1.62%, 250=98.30%, 500=0.03%, 750=0.02%, 1000=0.02% 00:32:11.344 lat (msec) : 2=0.01%, 4=0.01% 00:32:11.344 cpu : usr=3.15%, sys=8.40%, ctx=16263, majf=0, minf=1 00:32:11.344 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:11.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.344 issued rwts: total=16262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.344 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:11.344 00:32:11.344 Run status group 0 (all jobs): 00:32:11.344 READ: bw=4063KiB/s (4161kB/s), 4063KiB/s-4063KiB/s (4161kB/s-4161kB/s), io=8131KiB (8326kB), run=2001-2001msec 00:32:11.344 00:32:11.344 Disk stats (read/write): 00:32:11.344 sda: ios=15329/0, merge=0/0, ticks=1732/0, in_queue=1732, util=95.12% 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@21 -- # iscsiadm -m node --logout -p 10.0.0.1:3260 00:32:11.344 Logging out of session [sid: 39, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:32:11.344 Logout of [sid: 39, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@22 -- # waitforiscsidevices 0 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=0 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:32:11.344 iscsiadm: No active sessions. 
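[Editor's note] fio's `io=` figures in the read summary above follow directly from the issued-I/O count and the 512-byte block size; a quick shell cross-check of that arithmetic (numbers copied from the job0 summary: `issued rwts: total=16262`, `bs=512`):

```shell
# Totals for the read job above: 16262 reads of bs=512 bytes.
total_bytes=$((16262 * 512))
io_kib=$((total_bytes / 1024))   # fio's binary-unit figure (KiB)
io_kb=$((total_bytes / 1000))    # fio's decimal-unit figure (kB), truncated
echo "io=${io_kib}KiB (${io_kb}kB)"   # prints: io=8131KiB (8326kB)
```

This matches the `io=8131KiB (8326kB)` reported in the run status line.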
00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # true 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=0 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@31 -- # node_login_fio_logout 'HeaderDigest -v CRC32C,None' 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@14 -- # for arg in "$@" 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@15 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.HeaderDigest' -v CRC32C,None 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@17 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:32:11.344 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:32:11.344 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@18 -- # waitforiscsidevices 1 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=1 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:32:11.344 [2024-07-22 17:10:12.934596] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=1 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:32:11.344 17:10:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2 00:32:11.603 [global] 00:32:11.603 thread=1 00:32:11.603 invalidate=1 00:32:11.603 rw=write 00:32:11.603 time_based=1 00:32:11.603 runtime=2 00:32:11.603 ioengine=libaio 00:32:11.603 direct=1 00:32:11.603 bs=512 00:32:11.603 iodepth=1 00:32:11.603 norandommap=1 00:32:11.603 numjobs=1 00:32:11.603 00:32:11.603 [job0] 00:32:11.603 filename=/dev/sda 00:32:11.603 queue_depth set to 113 (sda) 00:32:11.603 job0: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:32:11.603 fio-3.35 00:32:11.603 Starting 1 thread 00:32:11.603 [2024-07-22 17:10:13.113035] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:32:14.130 [2024-07-22 17:10:15.222971] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:32:14.130 00:32:14.130 job0: (groupid=0, jobs=1): err= 0: pid=80051: Mon Jul 22 17:10:15 2024 00:32:14.130 write: IOPS=6946, BW=3473KiB/s (3556kB/s)(6950KiB/2001msec); 0 zone resets 00:32:14.130 slat (usec): min=4, max=254, avg= 8.25, stdev= 3.72 00:32:14.130 clat (usec): min=4, max=2667, avg=134.54, stdev=37.27 00:32:14.130 lat (usec): min=115, max=2675, avg=142.79, stdev=38.22 00:32:14.130 clat percentiles (usec): 00:32:14.130 | 1.00th=[ 115], 5.00th=[ 117], 10.00th=[ 119], 20.00th=[ 121], 00:32:14.130 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 130], 00:32:14.130 | 70.00th=[ 135], 80.00th=[ 145], 90.00th=[ 163], 95.00th=[ 180], 00:32:14.130 | 99.00th=[ 210], 99.50th=[ 237], 99.90th=[ 347], 99.95th=[ 445], 00:32:14.130 | 99.99th=[ 2442] 00:32:14.130 bw ( KiB/s): min= 2906, max= 3709, per=98.50%, avg=3421.67, stdev=447.55, samples=3 00:32:14.130 iops : min= 5812, max= 7418, avg=6843.33, stdev=895.11, samples=3 00:32:14.130 lat (usec) : 10=0.01%, 50=0.01%, 100=0.03%, 250=99.69%, 500=0.23% 00:32:14.130 lat (usec) : 750=0.01%, 1000=0.01% 00:32:14.130 lat (msec) : 4=0.01% 00:32:14.130 cpu : usr=2.60%, sys=7.15%, ctx=13906, majf=0, minf=1 00:32:14.130 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.130 issued rwts: total=0,13899,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.130 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:14.130 00:32:14.130 Run status group 0 (all jobs): 00:32:14.130 WRITE: bw=3473KiB/s (3556kB/s), 3473KiB/s-3473KiB/s (3556kB/s-3556kB/s), io=6950KiB (7116kB), run=2001-2001msec 00:32:14.130 00:32:14.130 Disk stats (read/write): 00:32:14.130 sda: ios=48/13060, merge=0/0, 
ticks=11/1759, in_queue=1771, util=95.47% 00:32:14.130 17:10:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 2 00:32:14.130 [global] 00:32:14.130 thread=1 00:32:14.130 invalidate=1 00:32:14.130 rw=read 00:32:14.130 time_based=1 00:32:14.130 runtime=2 00:32:14.130 ioengine=libaio 00:32:14.130 direct=1 00:32:14.130 bs=512 00:32:14.130 iodepth=1 00:32:14.130 norandommap=1 00:32:14.130 numjobs=1 00:32:14.130 00:32:14.130 [job0] 00:32:14.130 filename=/dev/sda 00:32:14.130 queue_depth set to 113 (sda) 00:32:14.130 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:32:14.130 fio-3.35 00:32:14.130 Starting 1 thread 00:32:16.038 00:32:16.038 job0: (groupid=0, jobs=1): err= 0: pid=80100: Mon Jul 22 17:10:17 2024 00:32:16.038 read: IOPS=8554, BW=4277KiB/s (4380kB/s)(8559KiB/2001msec) 00:32:16.038 slat (nsec): min=5041, max=63334, avg=6960.46, stdev=1921.47 00:32:16.038 clat (usec): min=82, max=404, avg=108.76, stdev=10.57 00:32:16.038 lat (usec): min=95, max=465, avg=115.72, stdev=10.97 00:32:16.038 clat percentiles (usec): 00:32:16.038 | 1.00th=[ 96], 5.00th=[ 99], 10.00th=[ 101], 20.00th=[ 103], 00:32:16.038 | 30.00th=[ 104], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 109], 00:32:16.038 | 70.00th=[ 111], 80.00th=[ 114], 90.00th=[ 120], 95.00th=[ 126], 00:32:16.038 | 99.00th=[ 143], 99.50th=[ 151], 99.90th=[ 180], 99.95th=[ 281], 00:32:16.038 | 99.99th=[ 396] 00:32:16.038 bw ( KiB/s): min= 4253, max= 4318, per=100.00%, avg=4283.00, stdev=32.79, samples=3 00:32:16.038 iops : min= 8506, max= 8636, avg=8566.00, stdev=65.57, samples=3 00:32:16.038 lat (usec) : 100=7.73%, 250=92.21%, 500=0.06% 00:32:16.038 cpu : usr=2.90%, sys=7.95%, ctx=17147, majf=0, minf=1 00:32:16.038 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:16.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:32:16.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.038 issued rwts: total=17117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.038 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:16.038 00:32:16.038 Run status group 0 (all jobs): 00:32:16.038 READ: bw=4277KiB/s (4380kB/s), 4277KiB/s-4277KiB/s (4380kB/s-4380kB/s), io=8559KiB (8764kB), run=2001-2001msec 00:32:16.038 00:32:16.038 Disk stats (read/write): 00:32:16.038 sda: ios=16189/0, merge=0/0, ticks=1729/0, in_queue=1729, util=95.08% 00:32:16.038 17:10:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@21 -- # iscsiadm -m node --logout -p 10.0.0.1:3260 00:32:16.038 Logging out of session [sid: 40, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:32:16.038 Logout of [sid: 40, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:32:16.038 17:10:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@22 -- # waitforiscsidevices 0 00:32:16.038 17:10:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=0 00:32:16.038 17:10:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:32:16.038 17:10:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:32:16.038 17:10:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:32:16.038 17:10:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:32:16.038 iscsiadm: No active sessions. 
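[Editor's note] The `waitforiscsidevices` loop traced above (iscsi_tgt/common.sh@116-123) polls the session listing until the expected number of attached disks appears, retrying up to 20 times. A standalone sketch reconstructed from the trace, with the session command parameterized via `SESSION_CMD` so it can be exercised without a live target (the real helper calls `iscsiadm -m session -P 3` directly):

```shell
# Poll until `grep -c` over the session listing reports the expected number
# of 'Attached scsi disk sdX' lines; give up after 20 attempts.
waitforiscsidevices() {
    local num=$1 i n
    for ((i = 1; i <= 20; i++)); do
        # grep -c exits non-zero on zero matches but still prints 0,
        # hence the `|| true` (mirroring the `# true` lines in the trace).
        n=$(${SESSION_CMD:-iscsiadm -m session -P 3} 2>/dev/null \
            | grep -c 'Attached scsi disk sd[a-z]*') || true
        [ "$n" -eq "$num" ] && return 0
        sleep 0.1
    done
    return 1
}
```

The `iscsiadm: No active sessions.` lines above are this loop confirming device count 0 after logout.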
00:32:16.038 17:10:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # true 00:32:16.038 ************************************ 00:32:16.038 END TEST iscsi_tgt_digest 00:32:16.038 ************************************ 00:32:16.038 17:10:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=0 00:32:16.038 17:10:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:32:16.038 17:10:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:32:16.038 00:32:16.038 real 0m9.450s 00:32:16.038 user 0m0.700s 00:32:16.038 sys 0m0.967s 00:32:16.038 17:10:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:16.038 17:10:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@10 -- # set +x 00:32:16.038 Cleaning up iSCSI connection 00:32:16.038 17:10:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1142 -- # return 0 00:32:16.038 17:10:17 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:32:16.038 17:10:17 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@92 -- # iscsicleanup 00:32:16.038 17:10:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:32:16.038 17:10:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:32:16.038 iscsiadm: No matching sessions found 00:32:16.038 17:10:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@981 -- # true 00:32:16.038 17:10:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:32:16.297 17:10:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@983 -- # rm -rf 00:32:16.297 17:10:17 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@93 -- # killprocess 79822 00:32:16.297 17:10:17 iscsi_tgt.iscsi_tgt_digests -- 
common/autotest_common.sh@948 -- # '[' -z 79822 ']' 00:32:16.297 17:10:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@952 -- # kill -0 79822 00:32:16.297 17:10:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@953 -- # uname 00:32:16.297 17:10:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:16.297 17:10:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79822 00:32:16.297 killing process with pid 79822 00:32:16.297 17:10:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:16.297 17:10:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:16.297 17:10:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79822' 00:32:16.297 17:10:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@967 -- # kill 79822 00:32:16.297 17:10:17 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@972 -- # wait 79822 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@94 -- # iscsitestfini 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:32:18.828 00:32:18.828 real 0m15.273s 00:32:18.828 user 0m53.842s 00:32:18.828 sys 0m3.705s 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:32:18.828 ************************************ 00:32:18.828 END TEST iscsi_tgt_digests 00:32:18.828 ************************************ 00:32:18.828 17:10:20 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:32:18.828 17:10:20 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@43 -- # run_test iscsi_tgt_fuzz /home/vagrant/spdk_repo/spdk/test/fuzz/autofuzz_iscsi.sh --timeout=30 00:32:18.828 17:10:20 iscsi_tgt -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:18.828 17:10:20 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:18.828 17:10:20 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:32:18.828 ************************************ 00:32:18.828 START TEST iscsi_tgt_fuzz 00:32:18.828 ************************************ 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/fuzz/autofuzz_iscsi.sh --timeout=30 00:32:18.828 * Looking for test storage... 00:32:18.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/fuzz 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- 
iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@11 -- # iscsitestinit 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@13 -- # '[' -z 10.0.0.1 ']' 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@18 -- # '[' -z 10.0.0.2 ']' 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@23 -- # timing_enter iscsi_fuzz_test 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@25 -- # MALLOC_BDEV_SIZE=64 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@26 -- # MALLOC_BLOCK_SIZE=4096 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@28 -- # TEST_TIMEOUT=1200 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@31 -- # for i in "$@" 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@32 -- # case "$i" in 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- 
fuzz/autofuzz_iscsi.sh@34 -- # TEST_TIMEOUT=30 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@39 -- # timing_enter start_iscsi_tgt 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:18.828 Process iscsipid: 80223 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@42 -- # iscsipid=80223 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@43 -- # echo 'Process iscsipid: 80223' 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@45 -- # trap 'killprocess $iscsipid; exit 1' SIGINT SIGTERM EXIT 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@47 -- # waitforlisten 80223 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@41 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --disable-cpumask-locks --wait-for-rpc 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@829 -- # '[' -z 80223 ']' 00:32:18.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:18.828 17:10:20 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:20.207 17:10:21 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:20.207 17:10:21 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@862 -- # return 0 00:32:20.207 17:10:21 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@49 -- # rpc_cmd iscsi_set_options -o 60 -a 16 00:32:20.207 17:10:21 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.207 17:10:21 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:20.207 17:10:21 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.207 17:10:21 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@50 -- # rpc_cmd framework_start_init 00:32:20.207 17:10:21 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.207 17:10:21 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:21.141 iscsi_tgt is listening. Running tests... 00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@51 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@52 -- # timing_exit start_iscsi_tgt 00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@54 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@55 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@56 -- # rpc_cmd bdev_malloc_create 64 4096 00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:21.141 Malloc0 00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@57 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:32:21.141 17:10:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@58 -- # sleep 1 00:32:22.126 17:10:23 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@60 -- # trap 'killprocess $iscsipid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:32:22.126 17:10:23 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/iscsi_fuzz/iscsi_fuzz -m 0xF0 -T 10.0.0.1 -t 30 00:32:54.238 Fuzzing completed. Shutting down the fuzz application. 00:32:54.238 00:32:54.238 device 0x6110000160c0 stats: Sent 8610 valid opcode PDUs, 79832 invalid opcode PDUs. 00:32:54.238 17:10:54 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@64 -- # rpc_cmd iscsi_delete_target_node iqn.2016-06.io.spdk:disk1 00:32:54.238 17:10:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.238 17:10:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:54.238 17:10:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.238 17:10:54 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@67 -- # rpc_cmd bdev_malloc_delete Malloc0 00:32:54.238 17:10:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.238 17:10:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:54.238 17:10:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.238 17:10:54 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:32:54.238 17:10:54 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@71 -- # killprocess 80223 00:32:54.238 17:10:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@948 -- # '[' -z 80223 ']' 00:32:54.238 17:10:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@952 -- # kill -0 80223 00:32:54.238 17:10:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@953 -- # uname 00:32:54.238 17:10:54 iscsi_tgt.iscsi_tgt_fuzz -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:54.238 17:10:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80223 00:32:54.238 killing process with pid 80223 00:32:54.238 17:10:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:54.238 17:10:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:54.238 17:10:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80223' 00:32:54.238 17:10:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@967 -- # kill 80223 00:32:54.238 17:10:54 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@972 -- # wait 80223 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@73 -- # iscsitestfini 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@75 -- # timing_exit iscsi_fuzz_test 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:56.140 00:32:56.140 real 0m37.060s 00:32:56.140 user 3m23.858s 00:32:56.140 sys 0m16.495s 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:56.140 ************************************ 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:56.140 END TEST iscsi_tgt_fuzz 00:32:56.140 ************************************ 00:32:56.140 17:10:57 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:32:56.140 17:10:57 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@44 -- # run_test iscsi_tgt_multiconnection /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection/multiconnection.sh 00:32:56.140 17:10:57 iscsi_tgt -- common/autotest_common.sh@1099 -- # 
'[' 2 -le 1 ']' 00:32:56.140 17:10:57 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:56.140 17:10:57 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:32:56.140 ************************************ 00:32:56.140 START TEST iscsi_tgt_multiconnection 00:32:56.140 ************************************ 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection/multiconnection.sh 00:32:56.140 * Looking for test storage... 00:32:56.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:32:56.140 
17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@11 -- # iscsitestinit 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@16 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@18 -- # CONNECTION_NUMBER=30 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@40 -- # timing_enter start_iscsi_tgt 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set 
+x 00:32:56.140 iSCSI target launched. pid: 80674 00:32:56.140 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@42 -- # iscsipid=80674 00:32:56.141 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@43 -- # echo 'iSCSI target launched. pid: 80674' 00:32:56.141 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@44 -- # trap 'remove_backends; iscsicleanup; killprocess $iscsipid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:32:56.141 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@41 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:32:56.141 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@46 -- # waitforlisten 80674 00:32:56.141 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 80674 ']' 00:32:56.141 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:56.141 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:56.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:56.141 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:56.141 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:56.141 17:10:57 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:56.141 [2024-07-22 17:10:57.693309] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:32:56.141 [2024-07-22 17:10:57.693530] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80674 ] 00:32:56.399 [2024-07-22 17:10:57.870133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.657 [2024-07-22 17:10:58.131364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.224 17:10:58 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:57.224 17:10:58 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:32:57.224 17:10:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 128 00:32:57.224 17:10:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:32:58.600 17:10:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:58.600 17:10:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:32:58.858 17:11:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@50 -- # timing_exit start_iscsi_tgt 00:32:58.858 17:11:00 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:58.858 17:11:00 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:59.128 17:11:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:32:59.386 17:11:00 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:32:59.643 Creating an iSCSI target node. 00:32:59.643 17:11:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@55 -- # echo 'Creating an iSCSI target node.' 00:32:59.643 17:11:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs0 -c 1048576 00:32:59.901 17:11:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@56 -- # ls_guid=b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f 00:32:59.901 17:11:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@59 -- # get_lvs_free_mb b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f 00:32:59.901 17:11:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1364 -- # local lvs_uuid=b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f 00:32:59.901 17:11:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:59.901 17:11:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1366 -- # local fc 00:32:59.901 17:11:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1367 -- # local cs 00:32:59.901 17:11:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:00.159 17:11:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:00.159 { 00:33:00.159 "uuid": "b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f", 00:33:00.159 "name": "lvs0", 00:33:00.159 "base_bdev": "Nvme0n1", 00:33:00.159 "total_data_clusters": 5099, 00:33:00.159 "free_clusters": 5099, 00:33:00.159 "block_size": 4096, 00:33:00.159 "cluster_size": 1048576 00:33:00.159 } 00:33:00.159 ]' 00:33:00.159 17:11:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1369 -- # jq '.[] | 
select(.uuid=="b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f") .free_clusters' 00:33:00.159 17:11:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1369 -- # fc=5099 00:33:00.159 17:11:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f") .cluster_size' 00:33:00.418 5099 00:33:00.418 17:11:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1370 -- # cs=1048576 00:33:00.418 17:11:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1373 -- # free_mb=5099 00:33:00.418 17:11:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1374 -- # echo 5099 00:33:00.418 17:11:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@60 -- # lvol_bdev_size=169 00:33:00.418 17:11:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # seq 1 30 00:33:00.418 17:11:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:00.418 17:11:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_1 169 00:33:00.676 30419539-1223-4074-8543-cff1e2668e5c 00:33:00.676 17:11:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:00.676 17:11:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_2 169 00:33:00.933 0009660e-b0b6-473c-91ba-7f577060d198 00:33:00.933 17:11:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:00.934 17:11:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_3 169 00:33:01.191 4ee9a997-dfaa-4de5-884d-525354b9295a 00:33:01.191 17:11:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:01.191 17:11:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_4 169 00:33:01.449 cf0e9369-094b-487e-9754-f9935a13c191 00:33:01.449 17:11:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:01.449 17:11:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_5 169 00:33:01.707 0aae2362-d634-41c8-8e0a-d8774cedc53b 00:33:01.707 17:11:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:01.707 17:11:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_6 169 00:33:01.965 07bfa119-590f-4a3f-8140-f112c29f3e9b 00:33:01.965 17:11:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:01.965 17:11:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_7 169 00:33:02.223 3cf7bac2-6844-4787-b399-0e88841d1171 00:33:02.223 17:11:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:02.223 17:11:03 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_8 169 00:33:02.482 7b19d1c9-ea89-452a-b98d-420e482a979f 00:33:02.482 17:11:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:02.482 17:11:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_9 169 00:33:02.482 f7465834-ec12-455b-b0a7-3d31ccfd4724 00:33:02.482 17:11:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:02.482 17:11:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_10 169 00:33:02.741 b65b887f-acf8-485b-a7fa-b114ed894919 00:33:02.741 17:11:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:02.741 17:11:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_11 169 00:33:02.999 6b7a10f8-161f-4391-a016-593ae2e7440a 00:33:02.999 17:11:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:02.999 17:11:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_12 169 00:33:03.257 81b0a6f9-1f45-44f9-b875-50354d770354 00:33:03.257 17:11:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:03.257 17:11:04 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_13 169 00:33:03.516 6d6a605b-8731-4364-a600-760f6465bb68 00:33:03.516 17:11:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:03.516 17:11:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_14 169 00:33:03.774 deacbcd7-52a8-46a6-a15f-5f36b3f10acd 00:33:03.774 17:11:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:03.774 17:11:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_15 169 00:33:04.033 84ffd593-e214-45c0-9608-c0ea07e5b995 00:33:04.033 17:11:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:04.033 17:11:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_16 169 00:33:04.291 7c7bda96-111c-4e88-bdd0-06c9a3f4e489 00:33:04.291 17:11:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:04.291 17:11:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_17 169 00:33:04.550 715c51ba-c719-4dea-854b-80a123ce44b4 00:33:04.550 17:11:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 
$CONNECTION_NUMBER) 00:33:04.550 17:11:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_18 169 00:33:04.808 f6924071-26a5-4a76-9b94-c39582462c27 00:33:04.808 17:11:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:04.808 17:11:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_19 169 00:33:04.808 ae9fac51-3c62-4805-8723-efb0eda2bd4e 00:33:04.808 17:11:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:04.808 17:11:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_20 169 00:33:05.067 c5d0d8c5-cc9d-402f-95be-0d60bc59e5be 00:33:05.067 17:11:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:05.067 17:11:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_21 169 00:33:05.325 0bd7abc7-4d7d-4708-b87a-1328a4418d50 00:33:05.325 17:11:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:05.325 17:11:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_22 169 00:33:05.584 47a92d6a-da30-4f85-9a90-2e7e27f35788 00:33:05.584 17:11:07 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:05.584 17:11:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_23 169 00:33:05.843 06f463d4-c2ba-4c96-bd42-5ecf70a6e82d 00:33:05.843 17:11:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:05.843 17:11:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_24 169 00:33:06.101 a250665d-44b3-4405-95ab-01aba0e470b4 00:33:06.101 17:11:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:06.101 17:11:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_25 169 00:33:06.360 a6d6ee40-0e05-4d83-872f-c9552034dedf 00:33:06.360 17:11:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:06.360 17:11:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_26 169 00:33:06.619 064f7869-2a34-42de-a697-10820b2621ce 00:33:06.619 17:11:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:06.619 17:11:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_27 169 00:33:06.877 714ce6e8-aabb-4cd4-8965-4621ff8ab67b 00:33:06.877 17:11:08 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:06.877 17:11:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_28 169 00:33:07.136 d6ec458e-0a07-45ba-ae26-543872ba18d7 00:33:07.136 17:11:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:07.136 17:11:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_29 169 00:33:07.395 8245dccf-2e25-4c0c-a3ff-0bb91f098876 00:33:07.395 17:11:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:07.395 17:11:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3c9b6e3-d3f5-49d7-a7a7-dd71b97e793f lbd_30 169 00:33:07.654 4e0d43b2-39ac-41a3-8f81-4442376c1907 00:33:07.654 17:11:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # seq 1 30 00:33:07.654 17:11:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:07.654 17:11:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_1:0 00:33:07.654 17:11:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias lvs0/lbd_1:0 1:2 256 -d 00:33:07.912 17:11:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:07.912 17:11:09 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_2:0 00:33:07.912 17:11:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target2 Target2_alias lvs0/lbd_2:0 1:2 256 -d 00:33:08.170 17:11:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:08.170 17:11:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_3:0 00:33:08.170 17:11:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias lvs0/lbd_3:0 1:2 256 -d 00:33:08.428 17:11:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:08.428 17:11:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_4:0 00:33:08.428 17:11:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target4 Target4_alias lvs0/lbd_4:0 1:2 256 -d 00:33:08.686 17:11:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:08.686 17:11:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_5:0 00:33:08.686 17:11:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target5 Target5_alias lvs0/lbd_5:0 1:2 256 -d 00:33:08.945 17:11:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:08.945 17:11:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_6:0 00:33:08.945 
17:11:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target6 Target6_alias lvs0/lbd_6:0 1:2 256 -d 00:33:09.203 17:11:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:09.203 17:11:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_7:0 00:33:09.204 17:11:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target7 Target7_alias lvs0/lbd_7:0 1:2 256 -d 00:33:09.462 17:11:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:09.462 17:11:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_8:0 00:33:09.462 17:11:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target8 Target8_alias lvs0/lbd_8:0 1:2 256 -d 00:33:09.723 17:11:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:09.723 17:11:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_9:0 00:33:09.723 17:11:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target9 Target9_alias lvs0/lbd_9:0 1:2 256 -d 00:33:09.981 17:11:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:09.981 17:11:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_10:0 00:33:09.981 17:11:11 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target10 Target10_alias lvs0/lbd_10:0 1:2 256 -d 00:33:10.240 17:11:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:10.240 17:11:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_11:0 00:33:10.240 17:11:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target11 Target11_alias lvs0/lbd_11:0 1:2 256 -d 00:33:10.498 17:11:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:10.498 17:11:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_12:0 00:33:10.498 17:11:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target12 Target12_alias lvs0/lbd_12:0 1:2 256 -d 00:33:10.756 17:11:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:10.756 17:11:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_13:0 00:33:10.756 17:11:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target13 Target13_alias lvs0/lbd_13:0 1:2 256 -d 00:33:11.015 17:11:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:33:11.015 17:11:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_14:0 00:33:11.015 17:11:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target14 Target14_alias lvs0/lbd_14:0 1:2 256 -d
00:33:11.272 17:11:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:11.272 17:11:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_15:0
00:33:11.272 17:11:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target15 Target15_alias lvs0/lbd_15:0 1:2 256 -d
00:33:11.531 17:11:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:11.531 17:11:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_16:0
00:33:11.531 17:11:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target16 Target16_alias lvs0/lbd_16:0 1:2 256 -d
00:33:11.788 17:11:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:11.788 17:11:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_17:0
00:33:11.788 17:11:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target17 Target17_alias lvs0/lbd_17:0 1:2 256 -d
00:33:12.050 17:11:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:12.050 17:11:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_18:0
00:33:12.050 17:11:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target18 Target18_alias lvs0/lbd_18:0 1:2 256 -d
00:33:12.308 17:11:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:12.308 17:11:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_19:0
00:33:12.308 17:11:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target19 Target19_alias lvs0/lbd_19:0 1:2 256 -d
00:33:12.566 17:11:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:12.566 17:11:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_20:0
00:33:12.566 17:11:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target20 Target20_alias lvs0/lbd_20:0 1:2 256 -d
00:33:12.825 17:11:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:12.825 17:11:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_21:0
00:33:12.825 17:11:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target21 Target21_alias lvs0/lbd_21:0 1:2 256 -d
00:33:13.083 17:11:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:13.083 17:11:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_22:0
00:33:13.083 17:11:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target22 Target22_alias lvs0/lbd_22:0 1:2 256 -d
00:33:13.342 17:11:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:13.342 17:11:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_23:0
00:33:13.342 17:11:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target23 Target23_alias lvs0/lbd_23:0 1:2 256 -d
00:33:13.600 17:11:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:13.600 17:11:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_24:0
00:33:13.600 17:11:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target24 Target24_alias lvs0/lbd_24:0 1:2 256 -d
00:33:13.858 17:11:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:13.858 17:11:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_25:0
00:33:13.858 17:11:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target25 Target25_alias lvs0/lbd_25:0 1:2 256 -d
00:33:14.117 17:11:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:14.117 17:11:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_26:0
00:33:14.117 17:11:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target26 Target26_alias lvs0/lbd_26:0 1:2 256 -d
00:33:14.419 17:11:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:14.419 17:11:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_27:0
00:33:14.419 17:11:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target27 Target27_alias lvs0/lbd_27:0 1:2 256 -d
00:33:14.678 17:11:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:14.678 17:11:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_28:0
00:33:14.678 17:11:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target28 Target28_alias lvs0/lbd_28:0 1:2 256 -d
00:33:14.936 17:11:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:14.936 17:11:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_29:0
00:33:14.936 17:11:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target29 Target29_alias lvs0/lbd_29:0 1:2 256 -d
00:33:15.195 17:11:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:15.195 17:11:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_30:0
00:33:15.195 17:11:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target30 Target30_alias lvs0/lbd_30:0 1:2 256 -d
00:33:15.453 17:11:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@69 -- # sleep 1
00:33:16.388 Logging into iSCSI target.
00:33:16.388 17:11:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@71 -- # echo 'Logging into iSCSI target.'
00:33:16.388 17:11:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@72 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target11
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target12
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target13
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target14
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target15
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target16
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target17
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target18
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target19
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target20
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target21
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target22
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target23
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target24
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target25
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target26
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target27
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target28
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target29
00:33:16.388 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target30
00:33:16.388 17:11:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@73 -- # iscsiadm -m node --login -p 10.0.0.1:3260
00:33:16.646 [2024-07-22 17:11:18.024946] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:16.646 [2024-07-22 17:11:18.047949] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:16.646 [2024-07-22 17:11:18.053395] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:16.646 [2024-07-22 17:11:18.064602] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:16.646 [2024-07-22 17:11:18.087731] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:16.646 [2024-07-22 17:11:18.094249] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:16.647 [2024-07-22 17:11:18.122513] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:16.647 [2024-07-22 17:11:18.168151] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:16.647 [2024-07-22 17:11:18.204917] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:16.647 [2024-07-22 17:11:18.228399] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:16.911 [2024-07-22 17:11:18.268933] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:16.911 [2024-07-22 17:11:18.290727] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:16.911 [2024-07-22 17:11:18.311004] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260]
00:33:16.911 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260]
00:33:16.911 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful.
00:33:16.911 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful.
00:33:16.911 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful.
00:33:16.911 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful.
00:33:16.911 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful.
00:33:16.911 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful.
00:33:16.911 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful.
00:33:16.911 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful.
00:33:16.911 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful.
00:33:16.911 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful.
00:33:16.912 Login to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful.
00:33:16.912 Login to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful.
00:33:16.912 Login to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful.
00:33:16.912 Login to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful.
00:33:16.912 [2024-07-22 17:11:18.339734] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:16.912 [2024-07-22 17:11:18.364702] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:16.912 [2024-07-22 17:11:18.399961] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:16.912 [2024-07-22 17:11:18.440228] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:16.912 [2024-07-22 17:11:18.487158] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:17.187 [2024-07-22 17:11:18.524125] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:17.187 [2024-07-22 17:11:18.552434] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:17.187 [2024-07-22 17:11:18.601239] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:17.187 [2024-07-22 17:11:18.635356] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:17.187 [2024-07-22 17:11:18.669855] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:17.188 [2024-07-22 17:11:18.722668] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:17.188 [2024-07-22 17:11:18.752392] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:17.188 [2024-07-22 17:11:18.794851] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:17.446 [2024-07-22 17:11:18.827358] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:17.446 [2024-07-22 17:11:18.885421] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:17.446 Login to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful.
00:33:17.446 Login to [iface: default, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] successful.
00:33:17.446 Login to [iface: default, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] successful.
00:33:17.446 Login to [iface: default, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] successful.
00:33:17.446 Login to [iface: default, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] successful.
00:33:17.446 Login to [iface: default, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] successful.
00:33:17.446 Login to [iface: default, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] successful.
00:33:17.446 Login to [iface: default, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] successful.
00:33:17.446 Login to [iface: default, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] successful.
00:33:17.446 Login to [iface: default, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] successful.
00:33:17.446 Login to [iface: default, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] successful.
00:33:17.446 Login to [iface: default, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] successful.
00:33:17.446 Login to [iface: default, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] successful.
00:33:17.446 Login to [iface: default, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] successful.
00:33:17.446 Login to [iface: default, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] successful.
00:33:17.446 Login to [iface: default, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] successful.
00:33:17.446 17:11:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@74 -- # waitforiscsidevices 30
00:33:17.446 17:11:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@116 -- # local num=30
00:33:17.446 17:11:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@118 -- # (( i = 1 ))
00:33:17.446 17:11:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@118 -- # (( i <= 20 ))
00:33:17.446 17:11:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3
00:33:17.446 17:11:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*'
00:33:17.446 [2024-07-22 17:11:18.925488] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:17.446 [2024-07-22 17:11:18.929790] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:17.446 17:11:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # n=30
00:33:17.446 17:11:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@120 -- # '[' 30 -ne 30 ']'
00:33:17.446 Running FIO
00:33:17.446 17:11:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@123 -- # return 0
00:33:17.446 17:11:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@76 -- # echo 'Running FIO'
00:33:17.446 17:11:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 64 -t randrw -r 5
00:33:17.705 [global]
00:33:17.705 thread=1
00:33:17.705 invalidate=1
00:33:17.705 rw=randrw
00:33:17.705 time_based=1
00:33:17.705 runtime=5
00:33:17.705 ioengine=libaio
00:33:17.705 direct=1
00:33:17.705 bs=131072
00:33:17.705 iodepth=64
00:33:17.705 norandommap=1
00:33:17.705 numjobs=1
00:33:17.705
00:33:17.705 [job0]
00:33:17.705 filename=/dev/sda
00:33:17.705 [job1]
00:33:17.705 filename=/dev/sdb
00:33:17.705 [job2]
00:33:17.705 filename=/dev/sdc
00:33:17.705 [job3]
00:33:17.705 filename=/dev/sdd
00:33:17.705 [job4]
00:33:17.705 filename=/dev/sde
00:33:17.705 [job5]
00:33:17.705 filename=/dev/sdf
00:33:17.705 [job6]
00:33:17.705 filename=/dev/sdg
00:33:17.705 [job7]
00:33:17.705 filename=/dev/sdh
00:33:17.705 [job8]
00:33:17.705 filename=/dev/sdi
00:33:17.705 [job9]
00:33:17.705 filename=/dev/sdj
00:33:17.705 [job10]
00:33:17.705 filename=/dev/sdk
00:33:17.705 [job11]
00:33:17.705 filename=/dev/sdl
00:33:17.705 [job12]
00:33:17.705 filename=/dev/sdm
00:33:17.705 [job13]
00:33:17.705 filename=/dev/sdn
00:33:17.705 [job14]
00:33:17.705 filename=/dev/sdo
00:33:17.705 [job15]
00:33:17.705 filename=/dev/sdp
00:33:17.705 [job16]
00:33:17.705 filename=/dev/sdq
00:33:17.705 [job17]
00:33:17.705 filename=/dev/sdr
00:33:17.705 [job18]
00:33:17.705 filename=/dev/sds
00:33:17.705 [job19]
00:33:17.705 filename=/dev/sdt
00:33:17.705 [job20]
00:33:17.705 filename=/dev/sdu
00:33:17.705 [job21]
00:33:17.705 filename=/dev/sdv
00:33:17.705 [job22]
00:33:17.705 filename=/dev/sdw
00:33:17.705 [job23]
00:33:17.705 filename=/dev/sdx
00:33:17.705 [job24]
00:33:17.705 filename=/dev/sdy
00:33:17.705 [job25]
00:33:17.705 filename=/dev/sdz
00:33:17.705 [job26]
00:33:17.705 filename=/dev/sdaa
00:33:17.705 [job27]
00:33:17.705 filename=/dev/sdab
00:33:17.705 [job28]
00:33:17.705 filename=/dev/sdac
00:33:17.705 [job29]
00:33:17.705 filename=/dev/sdad
00:33:17.963 queue_depth set to 113 (sda)
00:33:18.221 queue_depth set to 113 (sdb)
00:33:18.221 queue_depth set to 113 (sdc)
00:33:18.221 queue_depth set to 113 (sdd)
00:33:18.221 queue_depth set to 113 (sde)
00:33:18.221 queue_depth set to 113 (sdf)
00:33:18.221 queue_depth set to 113 (sdg)
00:33:18.221 queue_depth set to 113 (sdh)
00:33:18.221 queue_depth set to 113 (sdi)
00:33:18.221 queue_depth set to 113 (sdj)
00:33:18.221 queue_depth set to 113 (sdk)
00:33:18.221 queue_depth set to 113 (sdl)
00:33:18.480 queue_depth set to 113 (sdm)
00:33:18.480 queue_depth set to 113 (sdn)
00:33:18.480 queue_depth set to 113 (sdo)
00:33:18.480 queue_depth set to 113 (sdp)
00:33:18.480 queue_depth set to 113 (sdq)
00:33:18.480 queue_depth set to 113 (sdr)
00:33:18.480 queue_depth set to 113 (sds)
00:33:18.480 queue_depth set to 113 (sdt)
00:33:18.480 queue_depth set to 113 (sdu)
00:33:18.480 queue_depth set to 113 (sdv)
00:33:18.480 queue_depth set to 113 (sdw)
00:33:18.738 queue_depth set to 113 (sdx)
00:33:18.738 queue_depth set to 113 (sdy)
00:33:18.738 queue_depth set to 113 (sdz)
00:33:18.738 queue_depth set to 113 (sdaa)
00:33:18.738 queue_depth set to 113 (sdab)
00:33:18.738 queue_depth set to 113 (sdac)
00:33:18.738 queue_depth set to 113 (sdad)
00:33:18.996 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job2: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job3: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job4: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job5: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job6: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job7: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job8: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job9: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job10: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job11: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job12: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job13: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job14: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job15: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job16: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job17: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job18: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job19: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job20: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job21: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job22: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job23: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job24: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job25: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job26: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job27: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job28: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 job29: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:33:18.996 fio-3.35
00:33:18.996 Starting 30 threads
00:33:18.996 [2024-07-22 17:11:20.412861] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.996 [2024-07-22 17:11:20.417878] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.996 [2024-07-22 17:11:20.422133] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.996 [2024-07-22 17:11:20.424978] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.428005] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.430707] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.433418] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.436398] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.439569] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.442550] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.445409] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.448263] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.451071] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.453885] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.456629] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.459456] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.462390] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.465027] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.467859] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.470571] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.473296] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.476085] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.478844] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.481847] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.484722] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.488528] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.492739] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.496270] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.499768] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:18.997 [2024-07-22 17:11:20.502913] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:25.564 [2024-07-22 17:11:26.618812] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:25.564 [2024-07-22 17:11:26.641961] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:25.564 [2024-07-22 17:11:26.645763] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:25.564 [2024-07-22 17:11:26.649140] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:25.564 [2024-07-22 17:11:26.651499] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:25.564 [2024-07-22 17:11:26.653857] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:25.564 [2024-07-22 17:11:26.656189] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:25.564 [2024-07-22 17:11:26.658695] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:25.564 [2024-07-22 17:11:26.660965] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:25.564 [2024-07-22 17:11:26.663342] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:25.564 [2024-07-22 17:11:26.665617] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:25.564 [2024-07-22 17:11:26.667950] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:25.564 [2024-07-22 17:11:26.670460] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:25.564 [2024-07-22 17:11:26.676598] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:25.564 [2024-07-22 17:11:26.679221] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:25.564 [2024-07-22 17:11:26.681770] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:25.564 [2024-07-22 17:11:26.684329] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:25.564 [2024-07-22 17:11:26.686717] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:25.564 [2024-07-22 17:11:26.690198] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:25.564 [2024-07-22 17:11:26.694149] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:25.564
00:33:25.564 job0: (groupid=0, jobs=1): err= 0: pid=81624: Mon Jul 22 17:11:26 2024
00:33:25.564 read: IOPS=54, BW=6934KiB/s (7100kB/s)(37.8MiB/5575msec)
00:33:25.564 slat (usec): min=9, max=1868, avg=56.91, stdev=170.90
00:33:25.564 clat (msec): min=24, max=595, avg=83.20, stdev=50.17
00:33:25.564 lat (msec): min=24, max=597, avg=83.26, stdev=50.21
00:33:25.564 clat percentiles (msec):
00:33:25.564 | 1.00th=[ 42], 5.00th=[ 54], 10.00th=[ 63], 20.00th=[ 65],
00:33:25.564 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 70],
00:33:25.564 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 140], 95.00th=[ 188],
00:33:25.564 | 99.00th=[ 284], 99.50th=[ 292], 99.90th=[ 600], 99.95th=[ 600],
00:33:25.564 | 99.99th=[ 600]
00:33:25.564 bw ( KiB/s): min= 256, max=13312, per=3.06%, avg=7006.00, stdev=3541.67, samples=11
00:33:25.564 iops : min= 2, max= 104, avg=54.73, stdev=27.67, samples=11
00:33:25.564 write: IOPS=59, BW=7623KiB/s (7806kB/s)(41.5MiB/5575msec); 0 zone resets
00:33:25.564 slat (usec): min=14, max=1678, avg=57.55, stdev=125.55
00:33:25.564 clat (msec): min=261, max=1585, avg=996.98, stdev=184.11
00:33:25.564 lat (msec): min=261, max=1585, avg=997.03, stdev=184.11
00:33:25.564 clat percentiles (msec):
00:33:25.564 | 1.00th=[ 330], 5.00th=[ 617], 10.00th=[ 751], 20.00th=[ 969],
00:33:25.564 | 30.00th=[ 1003], 40.00th=[ 1020], 50.00th=[ 1028], 60.00th=[ 1036],
00:33:25.564 | 70.00th=[ 1045], 80.00th=[ 1053], 90.00th=[ 1083], 95.00th=[ 1301],
00:33:25.564 | 99.00th=[ 1536], 99.50th=[ 1569], 99.90th=[ 1586], 99.95th=[ 1586],
00:33:25.564 | 99.99th=[ 1586]
00:33:25.564 bw ( KiB/s): min= 2048, max= 7680, per=3.06%, avg=6913.40, stdev=1719.89, samples=10
00:33:25.564 iops : min= 16, max= 60, avg=54.00, stdev=13.43, samples=10
00:33:25.565 lat (msec) : 50=0.95%, 100=39.75%, 250=6.15%, 500=1.89%, 750=3.79%
00:33:25.565 lat (msec) : 1000=10.41%, 2000=37.07%
00:33:25.565 cpu : usr=0.14%, sys=0.43%, ctx=429, majf=0, minf=1
00:33:25.565 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.0%, >=64=90.1%
00:33:25.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:25.565 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0%
00:33:25.565 issued rwts: total=302,332,0,0 short=0,0,0,0 dropped=0,0,0,0
00:33:25.565 latency : target=0, window=0, percentile=100.00%, depth=64
00:33:25.565 job1: (groupid=0, jobs=1): err= 0: pid=81625: Mon Jul 22 17:11:26 2024
00:33:25.565 read: IOPS=53, BW=6821KiB/s (6985kB/s)(37.2MiB/5592msec)
00:33:25.565 slat (usec): min=7, max=907, avg=36.09, stdev=55.73
00:33:25.565 clat (msec): min=9, max=302, avg=82.03, stdev=43.38
00:33:25.565 lat (msec): min=9, max=302, avg=82.07, stdev=43.37
00:33:25.565 clat percentiles (msec):
00:33:25.565 | 1.00th=[ 28], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 66],
00:33:25.565 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 70],
00:33:25.565 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 138], 95.00th=[ 180],
00:33:25.565 | 99.00th=[ 292], 99.50th=[ 305], 99.90th=[ 305], 99.95th=[ 305],
00:33:25.565 | 99.99th=[ 305]
00:33:25.565 bw ( KiB/s): min= 256, max=13312, per=3.03%, avg=6933.82, stdev=3248.70, samples=11
00:33:25.565 iops : min= 2, max= 104, avg=54.09, stdev=25.36, samples=11
00:33:25.565 write: IOPS=59, BW=7599KiB/s (7782kB/s)(41.5MiB/5592msec); 0 zone resets
00:33:25.565 slat (usec): min=7, max=941, avg=44.42, stdev=63.09
00:33:25.565 clat (msec): min=239, max=1575, avg=1002.25, stdev=185.89
00:33:25.565 lat (msec): min=239, max=1575, avg=1002.30, stdev=185.90
00:33:25.565 clat percentiles (msec):
00:33:25.565 | 1.00th=[ 321], 5.00th=[ 634], 10.00th=[ 776], 20.00th=[ 978],
00:33:25.565 | 30.00th=[ 1003], 40.00th=[ 1020], 50.00th=[ 1028], 60.00th=[ 1036],
00:33:25.565 | 70.00th=[ 1045], 80.00th=[ 1062], 90.00th=[ 1099], 95.00th=[ 1284],
00:33:25.565 | 99.00th=[ 1536], 99.50th=[ 1552], 99.90th=[ 1569], 99.95th=[ 1569],
00:33:25.565 | 99.99th=[ 1569]
00:33:25.565 bw ( KiB/s): min= 2048, max= 7920, per=3.04%, avg=6884.80, stdev=1720.25, samples=10
00:33:25.565 iops : min= 16, max= 61, avg=53.70, stdev=13.38, samples=10
00:33:25.565 lat (msec) : 10=0.16%, 20=0.16%, 50=0.95%, 100=40.16%, 250=5.40%
00:33:25.565 lat (msec) : 500=1.75%, 750=3.65%, 1000=9.37%, 2000=38.41%
00:33:25.565 cpu : usr=0.11%, sys=0.47%, ctx=396, majf=0, minf=1
00:33:25.565 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.1%, >=64=90.0%
00:33:25.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:25.565 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0%
00:33:25.565 issued rwts: total=298,332,0,0 short=0,0,0,0 dropped=0,0,0,0
00:33:25.565 latency : target=0, window=0, percentile=100.00%, depth=64
00:33:25.565 job2: (groupid=0, jobs=1): err= 0: pid=81626: Mon Jul 22 17:11:26 2024
00:33:25.565 read: IOPS=59, BW=7668KiB/s (7852kB/s)(41.6MiB/5559msec)
00:33:25.565 slat (usec): min=11, max=115, avg=34.00, stdev=16.73
00:33:25.565 clat (msec): min=48, max=613, avg=91.11, stdev=71.57
00:33:25.565 lat (msec): min=48, max=613, avg=91.14, stdev=71.57
00:33:25.565 clat percentiles (msec):
00:33:25.565 | 1.00th=[ 51], 5.00th=[ 61], 10.00th=[ 65], 20.00th=[ 66],
00:33:25.565 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 70],
00:33:25.565 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 165], 95.00th=[ 234],
00:33:25.565 | 99.00th=[ 558], 99.50th=[ 575], 99.90th=[ 617], 99.95th=[ 617],
00:33:25.565 | 99.99th=[ 617]
00:33:25.565 bw ( KiB/s): min= 256, max=11776, per=3.35%, avg=7656.73, stdev=3243.14, samples=11
00:33:25.565 iops : min= 2, max= 92, avg=59.82, stdev=25.34, samples=11
00:33:25.565 write: IOPS=59, BW=7575KiB/s (7757kB/s)(41.1MiB/5559msec); 0 zone resets
00:33:25.565 slat (usec): min=16, max=129, avg=38.60, stdev=16.48
00:33:25.565 clat (msec): min=276, max=1542, avg=987.30, stdev=182.38
00:33:25.565 lat (msec): min=276, max=1542, avg=987.34, stdev=182.38
00:33:25.565 clat percentiles (msec):
00:33:25.565 | 1.00th=[ 397], 5.00th=[ 617], 10.00th=[ 760], 20.00th=[ 936],
00:33:25.565 | 30.00th=[ 978], 40.00th=[ 995], 50.00th=[ 1011], 60.00th=[ 1028],
00:33:25.565 | 70.00th=[ 1036], 80.00th=[ 1045], 90.00th=[ 1070], 95.00th=[ 1318],
00:33:25.565 | 99.00th=[ 1502], 99.50th=[ 1502], 99.90th=[ 1536], 99.95th=[ 1536],
00:33:25.565 | 99.99th=[ 1536]
00:33:25.565 bw ( KiB/s): min= 1792, max= 7680, per=3.06%, avg=6912.00, stdev=1806.17, samples=10
00:33:25.565 iops : min= 14, max= 60, avg=54.00, stdev=14.11, samples=10
00:33:25.565 lat (msec) : 50=0.45%, 100=41.69%, 250=6.34%, 500=2.27%, 750=4.23%
00:33:25.565 lat (msec) : 1000=15.86%, 2000=29.15%
00:33:25.565 cpu : usr=0.20%, sys=0.41%, ctx=371, majf=0, minf=1
00:33:25.565 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5%
00:33:25.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:25.565 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0%
00:33:25.565 issued rwts: total=333,329,0,0
short=0,0,0,0 dropped=0,0,0,0 00:33:25.565 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.565 job3: (groupid=0, jobs=1): err= 0: pid=81627: Mon Jul 22 17:11:26 2024 00:33:25.565 read: IOPS=55, BW=7166KiB/s (7338kB/s)(38.9MiB/5555msec) 00:33:25.565 slat (nsec): min=9615, max=94871, avg=31368.15, stdev=14745.99 00:33:25.565 clat (msec): min=44, max=580, avg=93.61, stdev=71.59 00:33:25.565 lat (msec): min=44, max=580, avg=93.64, stdev=71.59 00:33:25.565 clat percentiles (msec): 00:33:25.565 | 1.00th=[ 51], 5.00th=[ 64], 10.00th=[ 65], 20.00th=[ 67], 00:33:25.565 | 30.00th=[ 68], 40.00th=[ 69], 50.00th=[ 70], 60.00th=[ 71], 00:33:25.565 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 169], 95.00th=[ 239], 00:33:25.565 | 99.00th=[ 567], 99.50th=[ 584], 99.90th=[ 584], 99.95th=[ 584], 00:33:25.565 | 99.99th=[ 584] 00:33:25.565 bw ( KiB/s): min= 256, max=12544, per=3.12%, avg=7143.45, stdev=3313.32, samples=11 00:33:25.565 iops : min= 2, max= 98, avg=55.73, stdev=25.90, samples=11 00:33:25.565 write: IOPS=59, BW=7581KiB/s (7763kB/s)(41.1MiB/5555msec); 0 zone resets 00:33:25.565 slat (usec): min=12, max=1668, avg=42.87, stdev=92.35 00:33:25.565 clat (msec): min=272, max=1586, avg=990.06, stdev=191.51 00:33:25.565 lat (msec): min=272, max=1586, avg=990.11, stdev=191.51 00:33:25.565 clat percentiles (msec): 00:33:25.565 | 1.00th=[ 351], 5.00th=[ 600], 10.00th=[ 768], 20.00th=[ 936], 00:33:25.565 | 30.00th=[ 986], 40.00th=[ 1003], 50.00th=[ 1011], 60.00th=[ 1028], 00:33:25.565 | 70.00th=[ 1036], 80.00th=[ 1053], 90.00th=[ 1083], 95.00th=[ 1318], 00:33:25.565 | 99.00th=[ 1552], 99.50th=[ 1569], 99.90th=[ 1586], 99.95th=[ 1586], 00:33:25.565 | 99.99th=[ 1586] 00:33:25.565 bw ( KiB/s): min= 2048, max= 7920, per=3.06%, avg=6910.40, stdev=1726.82, samples=10 00:33:25.565 iops : min= 16, max= 61, avg=53.90, stdev=13.44, samples=10 00:33:25.565 lat (msec) : 50=0.31%, 100=38.91%, 250=7.81%, 500=2.19%, 750=4.38% 00:33:25.565 lat (msec) : 1000=13.75%, 2000=32.66% 
00:33:25.565 cpu : usr=0.18%, sys=0.38%, ctx=382, majf=0, minf=1 00:33:25.565 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2% 00:33:25.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.565 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.565 issued rwts: total=311,329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.565 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.565 job4: (groupid=0, jobs=1): err= 0: pid=81637: Mon Jul 22 17:11:26 2024 00:33:25.565 read: IOPS=59, BW=7593KiB/s (7775kB/s)(41.2MiB/5563msec) 00:33:25.565 slat (usec): min=12, max=2589, avg=66.46, stdev=178.76 00:33:25.565 clat (msec): min=50, max=601, avg=92.99, stdev=60.12 00:33:25.565 lat (msec): min=50, max=601, avg=93.05, stdev=60.14 00:33:25.565 clat percentiles (msec): 00:33:25.565 | 1.00th=[ 54], 5.00th=[ 64], 10.00th=[ 65], 20.00th=[ 66], 00:33:25.565 | 30.00th=[ 67], 40.00th=[ 69], 50.00th=[ 70], 60.00th=[ 71], 00:33:25.565 | 70.00th=[ 73], 80.00th=[ 105], 90.00th=[ 165], 95.00th=[ 220], 00:33:25.565 | 99.00th=[ 249], 99.50th=[ 584], 99.90th=[ 600], 99.95th=[ 600], 00:33:25.565 | 99.99th=[ 600] 00:33:25.565 bw ( KiB/s): min= 256, max=16416, per=3.34%, avg=7636.36, stdev=3999.01, samples=11 00:33:25.565 iops : min= 2, max= 128, avg=59.64, stdev=31.19, samples=11 00:33:25.565 write: IOPS=59, BW=7616KiB/s (7799kB/s)(41.4MiB/5563msec); 0 zone resets 00:33:25.565 slat (usec): min=12, max=1735, avg=81.13, stdev=174.46 00:33:25.565 clat (msec): min=271, max=1608, avg=980.77, stdev=191.22 00:33:25.565 lat (msec): min=271, max=1608, avg=980.85, stdev=191.24 00:33:25.565 clat percentiles (msec): 00:33:25.565 | 1.00th=[ 355], 5.00th=[ 600], 10.00th=[ 760], 20.00th=[ 869], 00:33:25.565 | 30.00th=[ 986], 40.00th=[ 1003], 50.00th=[ 1020], 60.00th=[ 1028], 00:33:25.565 | 70.00th=[ 1036], 80.00th=[ 1053], 90.00th=[ 1083], 95.00th=[ 1318], 00:33:25.565 | 99.00th=[ 1502], 99.50th=[ 1536], 99.90th=[ 
1603], 99.95th=[ 1603], 00:33:25.565 | 99.99th=[ 1603] 00:33:25.565 bw ( KiB/s): min= 2052, max= 7680, per=3.06%, avg=6912.40, stdev=1718.16, samples=10 00:33:25.565 iops : min= 16, max= 60, avg=54.00, stdev=13.43, samples=10 00:33:25.565 lat (msec) : 100=39.03%, 250=10.44%, 500=1.36%, 750=3.93%, 1000=13.01% 00:33:25.565 lat (msec) : 2000=32.22% 00:33:25.565 cpu : usr=0.23%, sys=0.34%, ctx=480, majf=0, minf=1 00:33:25.565 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:33:25.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.565 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.565 issued rwts: total=330,331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.565 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.565 job5: (groupid=0, jobs=1): err= 0: pid=81638: Mon Jul 22 17:11:26 2024 00:33:25.565 read: IOPS=62, BW=8030KiB/s (8223kB/s)(43.9MiB/5595msec) 00:33:25.565 slat (usec): min=9, max=864, avg=53.97, stdev=100.68 00:33:25.565 clat (msec): min=2, max=629, avg=86.85, stdev=68.17 00:33:25.565 lat (msec): min=2, max=629, avg=86.91, stdev=68.16 00:33:25.565 clat percentiles (msec): 00:33:25.565 | 1.00th=[ 5], 5.00th=[ 52], 10.00th=[ 64], 20.00th=[ 66], 00:33:25.565 | 30.00th=[ 66], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 70], 00:33:25.565 | 70.00th=[ 71], 80.00th=[ 74], 90.00th=[ 142], 95.00th=[ 268], 00:33:25.565 | 99.00th=[ 300], 99.50th=[ 609], 99.90th=[ 634], 99.95th=[ 634], 00:33:25.565 | 99.99th=[ 634] 00:33:25.565 bw ( KiB/s): min= 5109, max=13056, per=3.90%, avg=8931.60, stdev=2704.64, samples=10 00:33:25.565 iops : min= 39, max= 102, avg=69.60, stdev=21.30, samples=10 00:33:25.565 write: IOPS=59, BW=7595KiB/s (7778kB/s)(41.5MiB/5595msec); 0 zone resets 00:33:25.566 slat (usec): min=14, max=780, avg=52.87, stdev=77.07 00:33:25.566 clat (msec): min=16, max=1616, avg=984.52, stdev=207.25 00:33:25.566 lat (msec): min=16, max=1616, avg=984.57, stdev=207.25 
00:33:25.566 clat percentiles (msec): 00:33:25.566 | 1.00th=[ 215], 5.00th=[ 625], 10.00th=[ 735], 20.00th=[ 944], 00:33:25.566 | 30.00th=[ 986], 40.00th=[ 995], 50.00th=[ 1003], 60.00th=[ 1020], 00:33:25.566 | 70.00th=[ 1028], 80.00th=[ 1045], 90.00th=[ 1099], 95.00th=[ 1318], 00:33:25.566 | 99.00th=[ 1569], 99.50th=[ 1603], 99.90th=[ 1620], 99.95th=[ 1620], 00:33:25.566 | 99.99th=[ 1620] 00:33:25.566 bw ( KiB/s): min= 2816, max= 7664, per=3.07%, avg=6934.40, stdev=1473.53, samples=10 00:33:25.566 iops : min= 22, max= 59, avg=54.00, stdev=11.42, samples=10 00:33:25.566 lat (msec) : 4=0.44%, 10=1.02%, 20=0.15%, 50=0.73%, 100=42.31% 00:33:25.566 lat (msec) : 250=3.95%, 500=4.10%, 750=3.66%, 1000=16.98%, 2000=26.65% 00:33:25.566 cpu : usr=0.07%, sys=0.41%, ctx=557, majf=0, minf=1 00:33:25.566 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8% 00:33:25.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.566 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.566 issued rwts: total=351,332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.566 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.566 job6: (groupid=0, jobs=1): err= 0: pid=81667: Mon Jul 22 17:11:26 2024 00:33:25.566 read: IOPS=59, BW=7608KiB/s (7790kB/s)(41.4MiB/5569msec) 00:33:25.566 slat (usec): min=10, max=991, avg=50.35, stdev=101.49 00:33:25.566 clat (msec): min=37, max=616, avg=88.06, stdev=68.15 00:33:25.566 lat (msec): min=37, max=616, avg=88.12, stdev=68.15 00:33:25.566 clat percentiles (msec): 00:33:25.566 | 1.00th=[ 52], 5.00th=[ 55], 10.00th=[ 64], 20.00th=[ 66], 00:33:25.566 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 71], 00:33:25.566 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 136], 95.00th=[ 194], 00:33:25.566 | 99.00th=[ 584], 99.50th=[ 600], 99.90th=[ 617], 99.95th=[ 617], 00:33:25.566 | 99.99th=[ 617] 00:33:25.566 bw ( KiB/s): min= 4864, max=11752, per=3.66%, avg=8368.80, 
stdev=2445.98, samples=10 00:33:25.566 iops : min= 38, max= 91, avg=65.30, stdev=18.99, samples=10 00:33:25.566 write: IOPS=59, BW=7562KiB/s (7743kB/s)(41.1MiB/5569msec); 0 zone resets 00:33:25.566 slat (usec): min=15, max=6809, avg=69.69, stdev=380.78 00:33:25.566 clat (msec): min=262, max=1603, avg=991.35, stdev=178.58 00:33:25.566 lat (msec): min=268, max=1603, avg=991.42, stdev=178.50 00:33:25.566 clat percentiles (msec): 00:33:25.566 | 1.00th=[ 326], 5.00th=[ 617], 10.00th=[ 768], 20.00th=[ 969], 00:33:25.566 | 30.00th=[ 995], 40.00th=[ 1003], 50.00th=[ 1020], 60.00th=[ 1028], 00:33:25.566 | 70.00th=[ 1045], 80.00th=[ 1053], 90.00th=[ 1083], 95.00th=[ 1250], 00:33:25.566 | 99.00th=[ 1519], 99.50th=[ 1586], 99.90th=[ 1603], 99.95th=[ 1603], 00:33:25.566 | 99.99th=[ 1603] 00:33:25.566 bw ( KiB/s): min= 2043, max= 7936, per=3.06%, avg=6911.50, stdev=1729.43, samples=10 00:33:25.566 iops : min= 15, max= 62, avg=53.90, stdev=13.81, samples=10 00:33:25.566 lat (msec) : 50=0.30%, 100=41.52%, 250=7.27%, 500=1.67%, 750=3.94% 00:33:25.566 lat (msec) : 1000=12.73%, 2000=32.58% 00:33:25.566 cpu : usr=0.20%, sys=0.36%, ctx=465, majf=0, minf=1 00:33:25.566 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:33:25.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.566 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.566 issued rwts: total=331,329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.566 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.566 job7: (groupid=0, jobs=1): err= 0: pid=81680: Mon Jul 22 17:11:26 2024 00:33:25.566 read: IOPS=55, BW=7116KiB/s (7287kB/s)(38.6MiB/5558msec) 00:33:25.566 slat (usec): min=9, max=113, avg=31.65, stdev=16.31 00:33:25.566 clat (msec): min=47, max=618, avg=95.22, stdev=73.75 00:33:25.566 lat (msec): min=47, max=618, avg=95.25, stdev=73.75 00:33:25.566 clat percentiles (msec): 00:33:25.566 | 1.00th=[ 53], 5.00th=[ 65], 10.00th=[ 
66], 20.00th=[ 67], 00:33:25.566 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 70], 60.00th=[ 71], 00:33:25.566 | 70.00th=[ 73], 80.00th=[ 103], 90.00th=[ 171], 95.00th=[ 218], 00:33:25.566 | 99.00th=[ 584], 99.50th=[ 600], 99.90th=[ 617], 99.95th=[ 617], 00:33:25.566 | 99.99th=[ 617] 00:33:25.566 bw ( KiB/s): min= 256, max=13851, per=3.10%, avg=7100.64, stdev=3413.29, samples=11 00:33:25.566 iops : min= 2, max= 108, avg=55.45, stdev=26.62, samples=11 00:33:25.566 write: IOPS=59, BW=7577KiB/s (7759kB/s)(41.1MiB/5558msec); 0 zone resets 00:33:25.566 slat (nsec): min=12220, max=91684, avg=36665.86, stdev=15029.04 00:33:25.566 clat (msec): min=269, max=1605, avg=989.81, stdev=187.60 00:33:25.566 lat (msec): min=269, max=1605, avg=989.85, stdev=187.61 00:33:25.566 clat percentiles (msec): 00:33:25.566 | 1.00th=[ 351], 5.00th=[ 634], 10.00th=[ 760], 20.00th=[ 919], 00:33:25.566 | 30.00th=[ 995], 40.00th=[ 1011], 50.00th=[ 1020], 60.00th=[ 1028], 00:33:25.566 | 70.00th=[ 1045], 80.00th=[ 1053], 90.00th=[ 1083], 95.00th=[ 1301], 00:33:25.566 | 99.00th=[ 1569], 99.50th=[ 1586], 99.90th=[ 1603], 99.95th=[ 1603], 00:33:25.566 | 99.99th=[ 1603] 00:33:25.566 bw ( KiB/s): min= 2052, max= 7680, per=3.06%, avg=6912.40, stdev=1718.16, samples=10 00:33:25.566 iops : min= 16, max= 60, avg=54.00, stdev=13.43, samples=10 00:33:25.566 lat (msec) : 50=0.31%, 100=38.40%, 250=8.78%, 500=1.57%, 750=4.39% 00:33:25.566 lat (msec) : 1000=13.48%, 2000=33.07% 00:33:25.566 cpu : usr=0.20%, sys=0.36%, ctx=370, majf=0, minf=1 00:33:25.566 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.0%, >=64=90.1% 00:33:25.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.566 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.566 issued rwts: total=309,329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.566 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.566 job8: (groupid=0, jobs=1): err= 0: pid=81681: Mon Jul 22 17:11:26 
2024 00:33:25.566 read: IOPS=52, BW=6739KiB/s (6901kB/s)(36.8MiB/5584msec) 00:33:25.566 slat (usec): min=12, max=422, avg=33.78, stdev=27.24 00:33:25.566 clat (msec): min=12, max=626, avg=89.41, stdev=77.19 00:33:25.566 lat (msec): min=12, max=626, avg=89.44, stdev=77.19 00:33:25.566 clat percentiles (msec): 00:33:25.566 | 1.00th=[ 17], 5.00th=[ 58], 10.00th=[ 65], 20.00th=[ 66], 00:33:25.566 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 70], 00:33:25.566 | 70.00th=[ 71], 80.00th=[ 78], 90.00th=[ 132], 95.00th=[ 241], 00:33:25.566 | 99.00th=[ 609], 99.50th=[ 625], 99.90th=[ 625], 99.95th=[ 625], 00:33:25.566 | 99.99th=[ 625] 00:33:25.566 bw ( KiB/s): min= 4352, max=15104, per=3.24%, avg=7422.10, stdev=3168.80, samples=10 00:33:25.566 iops : min= 34, max= 118, avg=57.90, stdev=24.70, samples=10 00:33:25.566 write: IOPS=59, BW=7564KiB/s (7746kB/s)(41.2MiB/5584msec); 0 zone resets 00:33:25.566 slat (usec): min=16, max=145, avg=39.42, stdev=17.20 00:33:25.566 clat (msec): min=202, max=1630, avg=1001.27, stdev=184.54 00:33:25.566 lat (msec): min=202, max=1630, avg=1001.31, stdev=184.54 00:33:25.566 clat percentiles (msec): 00:33:25.566 | 1.00th=[ 334], 5.00th=[ 625], 10.00th=[ 768], 20.00th=[ 969], 00:33:25.566 | 30.00th=[ 995], 40.00th=[ 1011], 50.00th=[ 1028], 60.00th=[ 1045], 00:33:25.566 | 70.00th=[ 1053], 80.00th=[ 1070], 90.00th=[ 1099], 95.00th=[ 1284], 00:33:25.566 | 99.00th=[ 1519], 99.50th=[ 1636], 99.90th=[ 1636], 99.95th=[ 1636], 00:33:25.566 | 99.99th=[ 1636] 00:33:25.566 bw ( KiB/s): min= 256, max= 7680, per=2.79%, avg=6305.45, stdev=2585.32, samples=11 00:33:25.566 iops : min= 2, max= 60, avg=49.18, stdev=20.15, samples=11 00:33:25.566 lat (msec) : 20=0.80%, 50=1.28%, 100=37.98%, 250=5.13%, 500=2.56% 00:33:25.566 lat (msec) : 750=4.33%, 1000=11.70%, 2000=36.22% 00:33:25.566 cpu : usr=0.16%, sys=0.41%, ctx=382, majf=0, minf=1 00:33:25.566 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.1%, >=64=89.9% 00:33:25.566 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.566 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.566 issued rwts: total=294,330,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.566 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.566 job9: (groupid=0, jobs=1): err= 0: pid=81684: Mon Jul 22 17:11:26 2024 00:33:25.566 read: IOPS=65, BW=8381KiB/s (8583kB/s)(45.5MiB/5559msec) 00:33:25.566 slat (usec): min=8, max=453, avg=31.40, stdev=27.65 00:33:25.566 clat (msec): min=47, max=598, avg=95.48, stdev=68.26 00:33:25.566 lat (msec): min=47, max=598, avg=95.51, stdev=68.26 00:33:25.566 clat percentiles (msec): 00:33:25.566 | 1.00th=[ 50], 5.00th=[ 59], 10.00th=[ 65], 20.00th=[ 66], 00:33:25.566 | 30.00th=[ 68], 40.00th=[ 69], 50.00th=[ 70], 60.00th=[ 71], 00:33:25.566 | 70.00th=[ 75], 80.00th=[ 117], 90.00th=[ 167], 95.00th=[ 211], 00:33:25.566 | 99.00th=[ 567], 99.50th=[ 584], 99.90th=[ 600], 99.95th=[ 600], 00:33:25.566 | 99.99th=[ 600] 00:33:25.566 bw ( KiB/s): min= 256, max=19968, per=3.66%, avg=8378.18, stdev=5002.28, samples=11 00:33:25.566 iops : min= 2, max= 156, avg=65.45, stdev=39.08, samples=11 00:33:25.566 write: IOPS=59, BW=7575KiB/s (7757kB/s)(41.1MiB/5559msec); 0 zone resets 00:33:25.566 slat (usec): min=14, max=502, avg=39.28, stdev=38.21 00:33:25.566 clat (msec): min=259, max=1603, avg=973.86, stdev=189.94 00:33:25.566 lat (msec): min=259, max=1603, avg=973.90, stdev=189.94 00:33:25.566 clat percentiles (msec): 00:33:25.566 | 1.00th=[ 313], 5.00th=[ 609], 10.00th=[ 760], 20.00th=[ 835], 00:33:25.566 | 30.00th=[ 978], 40.00th=[ 1003], 50.00th=[ 1020], 60.00th=[ 1028], 00:33:25.566 | 70.00th=[ 1036], 80.00th=[ 1053], 90.00th=[ 1083], 95.00th=[ 1234], 00:33:25.566 | 99.00th=[ 1519], 99.50th=[ 1586], 99.90th=[ 1603], 99.95th=[ 1603], 00:33:25.566 | 99.99th=[ 1603] 00:33:25.566 bw ( KiB/s): min= 2048, max= 7680, per=3.06%, avg=6912.00, stdev=1719.42, samples=10 00:33:25.566 iops : min= 16, max= 
60, avg=54.00, stdev=13.43, samples=10 00:33:25.566 lat (msec) : 50=0.58%, 100=39.83%, 250=10.82%, 500=1.88%, 750=3.75% 00:33:25.566 lat (msec) : 1000=13.13%, 2000=30.01% 00:33:25.566 cpu : usr=0.13%, sys=0.40%, ctx=417, majf=0, minf=1 00:33:25.566 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.6%, >=64=90.9% 00:33:25.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.566 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.566 issued rwts: total=364,329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.566 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.567 job10: (groupid=0, jobs=1): err= 0: pid=81691: Mon Jul 22 17:11:26 2024 00:33:25.567 read: IOPS=64, BW=8310KiB/s (8509kB/s)(44.9MiB/5530msec) 00:33:25.567 slat (usec): min=11, max=208, avg=37.25, stdev=24.56 00:33:25.567 clat (msec): min=47, max=592, avg=95.18, stdev=82.56 00:33:25.567 lat (msec): min=47, max=592, avg=95.21, stdev=82.56 00:33:25.567 clat percentiles (msec): 00:33:25.567 | 1.00th=[ 49], 5.00th=[ 54], 10.00th=[ 64], 20.00th=[ 66], 00:33:25.567 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 70], 00:33:25.567 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 174], 95.00th=[ 236], 00:33:25.567 | 99.00th=[ 558], 99.50th=[ 575], 99.90th=[ 592], 99.95th=[ 592], 00:33:25.567 | 99.99th=[ 592] 00:33:25.567 bw ( KiB/s): min= 256, max=13568, per=3.57%, avg=8168.73, stdev=3766.69, samples=11 00:33:25.567 iops : min= 2, max= 106, avg=63.82, stdev=29.43, samples=11 00:33:25.567 write: IOPS=58, BW=7523KiB/s (7703kB/s)(40.6MiB/5530msec); 0 zone resets 00:33:25.567 slat (usec): min=16, max=681, avg=53.10, stdev=68.87 00:33:25.567 clat (msec): min=266, max=1562, avg=981.79, stdev=184.91 00:33:25.567 lat (msec): min=266, max=1562, avg=981.84, stdev=184.91 00:33:25.567 clat percentiles (msec): 00:33:25.567 | 1.00th=[ 347], 5.00th=[ 617], 10.00th=[ 768], 20.00th=[ 936], 00:33:25.567 | 30.00th=[ 978], 40.00th=[ 995], 50.00th=[ 
1011], 60.00th=[ 1020], 00:33:25.567 | 70.00th=[ 1028], 80.00th=[ 1045], 90.00th=[ 1062], 95.00th=[ 1267], 00:33:25.567 | 99.00th=[ 1552], 99.50th=[ 1569], 99.90th=[ 1569], 99.95th=[ 1569], 00:33:25.567 | 99.99th=[ 1569] 00:33:25.567 bw ( KiB/s): min= 2048, max= 7936, per=3.06%, avg=6912.00, stdev=1723.65, samples=10 00:33:25.567 iops : min= 16, max= 62, avg=54.00, stdev=13.47, samples=10 00:33:25.567 lat (msec) : 50=0.88%, 100=42.69%, 250=7.46%, 500=1.46%, 750=4.53% 00:33:25.567 lat (msec) : 1000=16.52%, 2000=26.46% 00:33:25.567 cpu : usr=0.18%, sys=0.42%, ctx=397, majf=0, minf=1 00:33:25.567 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8% 00:33:25.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.567 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.567 issued rwts: total=359,325,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.567 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.567 job11: (groupid=0, jobs=1): err= 0: pid=81719: Mon Jul 22 17:11:26 2024 00:33:25.567 read: IOPS=57, BW=7329KiB/s (7505kB/s)(39.9MiB/5571msec) 00:33:25.567 slat (usec): min=10, max=463, avg=30.73, stdev=34.82 00:33:25.567 clat (msec): min=27, max=620, avg=93.77, stdev=67.32 00:33:25.567 lat (msec): min=27, max=620, avg=93.80, stdev=67.32 00:33:25.567 clat percentiles (msec): 00:33:25.567 | 1.00th=[ 49], 5.00th=[ 63], 10.00th=[ 65], 20.00th=[ 67], 00:33:25.567 | 30.00th=[ 67], 40.00th=[ 69], 50.00th=[ 70], 60.00th=[ 70], 00:33:25.567 | 70.00th=[ 73], 80.00th=[ 109], 90.00th=[ 180], 95.00th=[ 205], 00:33:25.567 | 99.00th=[ 255], 99.50th=[ 600], 99.90th=[ 625], 99.95th=[ 625], 00:33:25.567 | 99.99th=[ 625] 00:33:25.567 bw ( KiB/s): min= 3584, max=16896, per=3.54%, avg=8087.70, stdev=3769.77, samples=10 00:33:25.567 iops : min= 28, max= 132, avg=63.10, stdev=29.42, samples=10 00:33:25.567 write: IOPS=59, BW=7559KiB/s (7741kB/s)(41.1MiB/5571msec); 0 zone resets 00:33:25.567 slat (usec): 
min=13, max=385, avg=36.25, stdev=37.64 00:33:25.567 clat (msec): min=272, max=1626, avg=990.79, stdev=194.67 00:33:25.567 lat (msec): min=272, max=1626, avg=990.83, stdev=194.67 00:33:25.567 clat percentiles (msec): 00:33:25.567 | 1.00th=[ 326], 5.00th=[ 642], 10.00th=[ 776], 20.00th=[ 902], 00:33:25.567 | 30.00th=[ 978], 40.00th=[ 995], 50.00th=[ 1011], 60.00th=[ 1028], 00:33:25.567 | 70.00th=[ 1045], 80.00th=[ 1053], 90.00th=[ 1099], 95.00th=[ 1334], 00:33:25.567 | 99.00th=[ 1586], 99.50th=[ 1603], 99.90th=[ 1620], 99.95th=[ 1620], 00:33:25.567 | 99.99th=[ 1620] 00:33:25.567 bw ( KiB/s): min= 2048, max= 7936, per=3.04%, avg=6884.90, stdev=1716.57, samples=10 00:33:25.567 iops : min= 16, max= 62, avg=53.70, stdev=13.38, samples=10 00:33:25.567 lat (msec) : 50=0.77%, 100=38.27%, 250=9.41%, 500=1.54%, 750=4.01% 00:33:25.567 lat (msec) : 1000=16.05%, 2000=29.94% 00:33:25.567 cpu : usr=0.09%, sys=0.38%, ctx=415, majf=0, minf=1 00:33:25.567 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3% 00:33:25.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.567 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.567 issued rwts: total=319,329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.567 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.567 job12: (groupid=0, jobs=1): err= 0: pid=81729: Mon Jul 22 17:11:26 2024 00:33:25.567 read: IOPS=68, BW=8765KiB/s (8975kB/s)(47.6MiB/5564msec) 00:33:25.567 slat (usec): min=10, max=1104, avg=47.31, stdev=86.88 00:33:25.567 clat (msec): min=47, max=601, avg=90.91, stdev=62.86 00:33:25.567 lat (msec): min=47, max=601, avg=90.96, stdev=62.85 00:33:25.567 clat percentiles (msec): 00:33:25.567 | 1.00th=[ 50], 5.00th=[ 54], 10.00th=[ 64], 20.00th=[ 66], 00:33:25.567 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 71], 00:33:25.567 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 165], 95.00th=[ 241], 00:33:25.567 | 99.00th=[ 271], 99.50th=[ 
584], 99.90th=[ 600], 99.95th=[ 600], 00:33:25.567 | 99.99th=[ 600] 00:33:25.567 bw ( KiB/s): min= 256, max=15360, per=3.86%, avg=8820.36, stdev=3899.89, samples=11 00:33:25.567 iops : min= 2, max= 120, avg=68.91, stdev=30.47, samples=11 00:33:25.567 write: IOPS=59, BW=7615KiB/s (7797kB/s)(41.4MiB/5564msec); 0 zone resets 00:33:25.567 slat (usec): min=12, max=3362, avg=69.90, stdev=210.23 00:33:25.567 clat (msec): min=275, max=1522, avg=968.55, stdev=183.85 00:33:25.567 lat (msec): min=278, max=1522, avg=968.62, stdev=183.82 00:33:25.567 clat percentiles (msec): 00:33:25.567 | 1.00th=[ 384], 5.00th=[ 600], 10.00th=[ 743], 20.00th=[ 860], 00:33:25.567 | 30.00th=[ 969], 40.00th=[ 986], 50.00th=[ 1003], 60.00th=[ 1020], 00:33:25.567 | 70.00th=[ 1028], 80.00th=[ 1045], 90.00th=[ 1062], 95.00th=[ 1301], 00:33:25.567 | 99.00th=[ 1485], 99.50th=[ 1502], 99.90th=[ 1519], 99.95th=[ 1519], 00:33:25.567 | 99.99th=[ 1519] 00:33:25.567 bw ( KiB/s): min= 1792, max= 7680, per=3.06%, avg=6912.00, stdev=1806.17, samples=10 00:33:25.567 iops : min= 14, max= 60, avg=54.00, stdev=14.11, samples=10 00:33:25.567 lat (msec) : 50=0.84%, 100=43.12%, 250=7.30%, 500=2.95%, 750=4.07% 00:33:25.567 lat (msec) : 1000=17.28%, 2000=24.44% 00:33:25.567 cpu : usr=0.16%, sys=0.34%, ctx=573, majf=0, minf=1 00:33:25.567 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:33:25.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.567 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.567 issued rwts: total=381,331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.567 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.567 job13: (groupid=0, jobs=1): err= 0: pid=81767: Mon Jul 22 17:11:26 2024 00:33:25.567 read: IOPS=63, BW=8139KiB/s (8334kB/s)(44.5MiB/5599msec) 00:33:25.567 slat (usec): min=8, max=732, avg=35.69, stdev=43.00 00:33:25.567 clat (usec): min=1139, max=637859, avg=77987.16, stdev=68435.18 
00:33:25.567 lat (usec): min=1218, max=637884, avg=78022.85, stdev=68434.62 00:33:25.567 clat percentiles (msec): 00:33:25.567 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 37], 20.00th=[ 65], 00:33:25.567 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:33:25.567 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 102], 95.00th=[ 167], 00:33:25.567 | 99.00th=[ 313], 99.50th=[ 617], 99.90th=[ 642], 99.95th=[ 642], 00:33:25.567 | 99.99th=[ 642] 00:33:25.567 bw ( KiB/s): min= 255, max=17408, per=3.59%, avg=8215.18, stdev=4144.63, samples=11 00:33:25.567 iops : min= 1, max= 136, avg=64.09, stdev=32.57, samples=11 00:33:25.567 write: IOPS=59, BW=7613KiB/s (7795kB/s)(41.6MiB/5599msec); 0 zone resets 00:33:25.567 slat (usec): min=10, max=382, avg=41.88, stdev=39.16 00:33:25.567 clat (msec): min=9, max=1589, avg=990.58, stdev=214.11 00:33:25.567 lat (msec): min=9, max=1589, avg=990.62, stdev=214.12 00:33:25.567 clat percentiles (msec): 00:33:25.567 | 1.00th=[ 31], 5.00th=[ 600], 10.00th=[ 776], 20.00th=[ 953], 00:33:25.567 | 30.00th=[ 995], 40.00th=[ 1011], 50.00th=[ 1020], 60.00th=[ 1028], 00:33:25.567 | 70.00th=[ 1045], 80.00th=[ 1062], 90.00th=[ 1150], 95.00th=[ 1301], 00:33:25.567 | 99.00th=[ 1569], 99.50th=[ 1586], 99.90th=[ 1586], 99.95th=[ 1586], 00:33:25.567 | 99.99th=[ 1586] 00:33:25.567 bw ( KiB/s): min= 3072, max= 7680, per=3.09%, avg=6988.80, stdev=1412.78, samples=10 00:33:25.567 iops : min= 24, max= 60, avg=54.60, stdev=11.04, samples=10 00:33:25.567 lat (msec) : 2=0.44%, 4=0.29%, 10=3.48%, 20=0.29%, 50=2.18% 00:33:25.567 lat (msec) : 100=40.35%, 250=3.34%, 500=2.61%, 750=3.05%, 1000=11.47% 00:33:25.567 lat (msec) : 2000=32.51% 00:33:25.567 cpu : usr=0.16%, sys=0.41%, ctx=424, majf=0, minf=1 00:33:25.567 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.6%, >=64=90.9% 00:33:25.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.567 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.567 
issued rwts: total=356,333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.567 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.567 job14: (groupid=0, jobs=1): err= 0: pid=81776: Mon Jul 22 17:11:26 2024 00:33:25.567 read: IOPS=57, BW=7317KiB/s (7493kB/s)(40.0MiB/5598msec) 00:33:25.567 slat (usec): min=9, max=551, avg=39.28, stdev=46.41 00:33:25.567 clat (msec): min=4, max=623, avg=90.08, stdev=76.16 00:33:25.567 lat (msec): min=4, max=623, avg=90.11, stdev=76.16 00:33:25.567 clat percentiles (msec): 00:33:25.567 | 1.00th=[ 8], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 66], 00:33:25.567 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 70], 00:33:25.567 | 70.00th=[ 71], 80.00th=[ 74], 90.00th=[ 138], 95.00th=[ 309], 00:33:25.567 | 99.00th=[ 334], 99.50th=[ 625], 99.90th=[ 625], 99.95th=[ 625], 00:33:25.567 | 99.99th=[ 625] 00:33:25.567 bw ( KiB/s): min= 6400, max=11497, per=3.56%, avg=8138.50, stdev=1567.06, samples=10 00:33:25.567 iops : min= 50, max= 89, avg=63.50, stdev=12.05, samples=10 00:33:25.567 write: IOPS=59, BW=7591KiB/s (7773kB/s)(41.5MiB/5598msec); 0 zone resets 00:33:25.567 slat (usec): min=13, max=3965, avg=67.45, stdev=240.62 00:33:25.567 clat (msec): min=67, max=1576, avg=989.84, stdev=194.61 00:33:25.567 lat (msec): min=67, max=1576, avg=989.91, stdev=194.61 00:33:25.567 clat percentiles (msec): 00:33:25.567 | 1.00th=[ 305], 5.00th=[ 634], 10.00th=[ 760], 20.00th=[ 944], 00:33:25.567 | 30.00th=[ 986], 40.00th=[ 1003], 50.00th=[ 1011], 60.00th=[ 1028], 00:33:25.567 | 70.00th=[ 1036], 80.00th=[ 1053], 90.00th=[ 1099], 95.00th=[ 1284], 00:33:25.568 | 99.00th=[ 1552], 99.50th=[ 1569], 99.90th=[ 1569], 99.95th=[ 1569], 00:33:25.568 | 99.99th=[ 1569] 00:33:25.568 bw ( KiB/s): min= 255, max= 7680, per=2.79%, avg=6306.36, stdev=2538.59, samples=11 00:33:25.568 iops : min= 1, max= 60, avg=49.09, stdev=20.22, samples=11 00:33:25.568 lat (msec) : 10=0.92%, 20=0.46%, 50=0.46%, 100=40.95%, 250=3.22% 00:33:25.568 lat (msec) : 500=4.14%, 
750=3.99%, 1000=15.64%, 2000=30.21% 00:33:25.568 cpu : usr=0.23%, sys=0.32%, ctx=470, majf=0, minf=1 00:33:25.568 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3% 00:33:25.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.568 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.568 issued rwts: total=320,332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.568 job15: (groupid=0, jobs=1): err= 0: pid=81833: Mon Jul 22 17:11:26 2024 00:33:25.568 read: IOPS=57, BW=7382KiB/s (7559kB/s)(40.1MiB/5566msec) 00:33:25.568 slat (usec): min=8, max=484, avg=28.92, stdev=32.80 00:33:25.568 clat (msec): min=50, max=614, avg=90.83, stdev=61.87 00:33:25.568 lat (msec): min=50, max=614, avg=90.86, stdev=61.87 00:33:25.568 clat percentiles (msec): 00:33:25.568 | 1.00th=[ 52], 5.00th=[ 64], 10.00th=[ 65], 20.00th=[ 66], 00:33:25.568 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 70], 60.00th=[ 71], 00:33:25.568 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 161], 95.00th=[ 228], 00:33:25.568 | 99.00th=[ 255], 99.50th=[ 592], 99.90th=[ 617], 99.95th=[ 617], 00:33:25.568 | 99.99th=[ 617] 00:33:25.568 bw ( KiB/s): min= 5376, max=14592, per=3.57%, avg=8166.40, stdev=2642.44, samples=10 00:33:25.568 iops : min= 42, max= 114, avg=63.80, stdev=20.64, samples=10 00:33:25.568 write: IOPS=59, BW=7589KiB/s (7771kB/s)(41.2MiB/5566msec); 0 zone resets 00:33:25.568 slat (usec): min=13, max=423, avg=35.39, stdev=38.01 00:33:25.568 clat (msec): min=276, max=1618, avg=989.05, stdev=187.82 00:33:25.568 lat (msec): min=276, max=1618, avg=989.09, stdev=187.83 00:33:25.568 clat percentiles (msec): 00:33:25.568 | 1.00th=[ 351], 5.00th=[ 617], 10.00th=[ 760], 20.00th=[ 919], 00:33:25.568 | 30.00th=[ 969], 40.00th=[ 995], 50.00th=[ 1020], 60.00th=[ 1036], 00:33:25.568 | 70.00th=[ 1045], 80.00th=[ 1062], 90.00th=[ 1083], 95.00th=[ 1284], 00:33:25.568 | 99.00th=[ 
1586], 99.50th=[ 1603], 99.90th=[ 1620], 99.95th=[ 1620], 00:33:25.568 | 99.99th=[ 1620] 00:33:25.568 bw ( KiB/s): min= 2048, max= 7680, per=3.04%, avg=6886.40, stdev=1708.59, samples=10 00:33:25.568 iops : min= 16, max= 60, avg=53.80, stdev=13.35, samples=10 00:33:25.568 lat (msec) : 100=40.09%, 250=8.60%, 500=1.54%, 750=3.84%, 1000=16.44% 00:33:25.568 lat (msec) : 2000=29.49% 00:33:25.568 cpu : usr=0.14%, sys=0.29%, ctx=413, majf=0, minf=1 00:33:25.568 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3% 00:33:25.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.568 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.568 issued rwts: total=321,330,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.568 job16: (groupid=0, jobs=1): err= 0: pid=81834: Mon Jul 22 17:11:26 2024 00:33:25.568 read: IOPS=66, BW=8509KiB/s (8713kB/s)(46.5MiB/5596msec) 00:33:25.568 slat (usec): min=8, max=732, avg=35.84, stdev=48.57 00:33:25.568 clat (usec): min=1953, max=660085, avg=96440.16, stdev=73865.48 00:33:25.568 lat (usec): min=1964, max=660104, avg=96476.00, stdev=73862.53 00:33:25.568 clat percentiles (msec): 00:33:25.568 | 1.00th=[ 4], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 66], 00:33:25.568 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 71], 00:33:25.568 | 70.00th=[ 72], 80.00th=[ 109], 90.00th=[ 190], 95.00th=[ 275], 00:33:25.568 | 99.00th=[ 317], 99.50th=[ 642], 99.90th=[ 659], 99.95th=[ 659], 00:33:25.568 | 99.99th=[ 659] 00:33:25.568 bw ( KiB/s): min= 255, max=16640, per=3.76%, avg=8608.09, stdev=4204.77, samples=11 00:33:25.568 iops : min= 1, max= 130, avg=67.00, stdev=33.10, samples=11 00:33:25.568 write: IOPS=59, BW=7594KiB/s (7776kB/s)(41.5MiB/5596msec); 0 zone resets 00:33:25.568 slat (usec): min=13, max=1001, avg=48.65, stdev=79.14 00:33:25.568 clat (msec): min=9, max=1629, avg=968.56, stdev=211.05 00:33:25.568 
lat (msec): min=9, max=1629, avg=968.60, stdev=211.06 00:33:25.568 clat percentiles (msec): 00:33:25.568 | 1.00th=[ 157], 5.00th=[ 625], 10.00th=[ 735], 20.00th=[ 852], 00:33:25.568 | 30.00th=[ 978], 40.00th=[ 995], 50.00th=[ 1011], 60.00th=[ 1020], 00:33:25.568 | 70.00th=[ 1028], 80.00th=[ 1045], 90.00th=[ 1083], 95.00th=[ 1267], 00:33:25.568 | 99.00th=[ 1536], 99.50th=[ 1636], 99.90th=[ 1636], 99.95th=[ 1636], 00:33:25.568 | 99.99th=[ 1636] 00:33:25.568 bw ( KiB/s): min= 2560, max= 7920, per=3.07%, avg=6934.50, stdev=1559.98, samples=10 00:33:25.568 iops : min= 20, max= 61, avg=54.00, stdev=12.10, samples=10 00:33:25.568 lat (msec) : 2=0.28%, 4=0.43%, 10=0.28%, 50=0.43%, 100=40.77% 00:33:25.568 lat (msec) : 250=8.24%, 500=3.69%, 750=4.12%, 1000=14.63%, 2000=27.13% 00:33:25.568 cpu : usr=0.09%, sys=0.43%, ctx=516, majf=0, minf=1 00:33:25.568 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1% 00:33:25.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.568 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.568 issued rwts: total=372,332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.568 job17: (groupid=0, jobs=1): err= 0: pid=81835: Mon Jul 22 17:11:26 2024 00:33:25.568 read: IOPS=52, BW=6767KiB/s (6930kB/s)(36.8MiB/5561msec) 00:33:25.568 slat (usec): min=9, max=4846, avg=55.84, stdev=296.77 00:33:25.568 clat (msec): min=49, max=611, avg=96.28, stdev=70.09 00:33:25.568 lat (msec): min=49, max=611, avg=96.33, stdev=70.10 00:33:25.568 clat percentiles (msec): 00:33:25.568 | 1.00th=[ 54], 5.00th=[ 64], 10.00th=[ 65], 20.00th=[ 66], 00:33:25.568 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 71], 00:33:25.568 | 70.00th=[ 73], 80.00th=[ 116], 90.00th=[ 180], 95.00th=[ 228], 00:33:25.568 | 99.00th=[ 575], 99.50th=[ 592], 99.90th=[ 609], 99.95th=[ 609], 00:33:25.568 | 99.99th=[ 609] 00:33:25.568 bw ( KiB/s): 
min= 1792, max=15104, per=3.26%, avg=7449.60, stdev=3517.24, samples=10 00:33:25.568 iops : min= 14, max= 118, avg=58.20, stdev=27.48, samples=10 00:33:25.568 write: IOPS=59, BW=7573KiB/s (7754kB/s)(41.1MiB/5561msec); 0 zone resets 00:33:25.568 slat (usec): min=13, max=109, avg=35.98, stdev=15.12 00:33:25.568 clat (msec): min=272, max=1615, avg=992.50, stdev=184.10 00:33:25.568 lat (msec): min=272, max=1616, avg=992.54, stdev=184.10 00:33:25.568 clat percentiles (msec): 00:33:25.568 | 1.00th=[ 351], 5.00th=[ 651], 10.00th=[ 785], 20.00th=[ 902], 00:33:25.568 | 30.00th=[ 969], 40.00th=[ 1011], 50.00th=[ 1028], 60.00th=[ 1036], 00:33:25.568 | 70.00th=[ 1053], 80.00th=[ 1062], 90.00th=[ 1099], 95.00th=[ 1284], 00:33:25.568 | 99.00th=[ 1569], 99.50th=[ 1586], 99.90th=[ 1620], 99.95th=[ 1620], 00:33:25.568 | 99.99th=[ 1620] 00:33:25.568 bw ( KiB/s): min= 2048, max= 7680, per=3.04%, avg=6886.40, stdev=1712.84, samples=10 00:33:25.568 iops : min= 16, max= 60, avg=53.80, stdev=13.38, samples=10 00:33:25.568 lat (msec) : 50=0.16%, 100=36.12%, 250=9.95%, 500=1.77%, 750=3.85% 00:33:25.568 lat (msec) : 1000=15.09%, 2000=33.07% 00:33:25.568 cpu : usr=0.20%, sys=0.32%, ctx=398, majf=0, minf=1 00:33:25.568 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.1%, >=64=89.9% 00:33:25.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.568 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.568 issued rwts: total=294,329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.568 job18: (groupid=0, jobs=1): err= 0: pid=81836: Mon Jul 22 17:11:26 2024 00:33:25.568 read: IOPS=58, BW=7443KiB/s (7622kB/s)(40.5MiB/5572msec) 00:33:25.568 slat (usec): min=9, max=349, avg=36.17, stdev=35.57 00:33:25.568 clat (msec): min=28, max=639, avg=87.84, stdev=64.82 00:33:25.568 lat (msec): min=28, max=639, avg=87.88, stdev=64.82 00:33:25.568 clat percentiles (msec): 
00:33:25.568 | 1.00th=[ 53], 5.00th=[ 64], 10.00th=[ 65], 20.00th=[ 66], 00:33:25.568 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 70], 00:33:25.568 | 70.00th=[ 71], 80.00th=[ 79], 90.00th=[ 133], 95.00th=[ 207], 00:33:25.568 | 99.00th=[ 264], 99.50th=[ 617], 99.90th=[ 642], 99.95th=[ 642], 00:33:25.568 | 99.99th=[ 642] 00:33:25.568 bw ( KiB/s): min= 6387, max=14080, per=3.59%, avg=8216.30, stdev=2178.30, samples=10 00:33:25.568 iops : min= 49, max= 110, avg=64.10, stdev=17.10, samples=10 00:33:25.568 write: IOPS=59, BW=7558KiB/s (7739kB/s)(41.1MiB/5572msec); 0 zone resets 00:33:25.568 slat (usec): min=14, max=595, avg=53.70, stdev=82.19 00:33:25.568 clat (msec): min=274, max=1557, avg=995.34, stdev=182.22 00:33:25.568 lat (msec): min=274, max=1557, avg=995.40, stdev=182.22 00:33:25.568 clat percentiles (msec): 00:33:25.568 | 1.00th=[ 321], 5.00th=[ 642], 10.00th=[ 760], 20.00th=[ 961], 00:33:25.568 | 30.00th=[ 995], 40.00th=[ 1003], 50.00th=[ 1011], 60.00th=[ 1028], 00:33:25.568 | 70.00th=[ 1045], 80.00th=[ 1062], 90.00th=[ 1099], 95.00th=[ 1318], 00:33:25.568 | 99.00th=[ 1519], 99.50th=[ 1536], 99.90th=[ 1552], 99.95th=[ 1552], 00:33:25.568 | 99.99th=[ 1552] 00:33:25.568 bw ( KiB/s): min= 2048, max= 7936, per=3.04%, avg=6884.90, stdev=1721.06, samples=10 00:33:25.568 iops : min= 16, max= 62, avg=53.70, stdev=13.43, samples=10 00:33:25.568 lat (msec) : 50=0.46%, 100=41.35%, 250=7.04%, 500=1.53%, 750=3.98% 00:33:25.568 lat (msec) : 1000=13.32%, 2000=32.31% 00:33:25.568 cpu : usr=0.11%, sys=0.41%, ctx=496, majf=0, minf=1 00:33:25.568 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.4% 00:33:25.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.568 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.568 issued rwts: total=324,329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.568 job19: (groupid=0, 
jobs=1): err= 0: pid=81837: Mon Jul 22 17:11:26 2024 00:33:25.568 read: IOPS=65, BW=8421KiB/s (8623kB/s)(45.8MiB/5563msec) 00:33:25.568 slat (usec): min=11, max=211, avg=31.47, stdev=18.58 00:33:25.568 clat (msec): min=44, max=625, avg=90.39, stdev=60.58 00:33:25.568 lat (msec): min=44, max=625, avg=90.43, stdev=60.58 00:33:25.568 clat percentiles (msec): 00:33:25.568 | 1.00th=[ 52], 5.00th=[ 64], 10.00th=[ 65], 20.00th=[ 66], 00:33:25.568 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 70], 00:33:25.568 | 70.00th=[ 72], 80.00th=[ 86], 90.00th=[ 159], 95.00th=[ 213], 00:33:25.569 | 99.00th=[ 271], 99.50th=[ 617], 99.90th=[ 625], 99.95th=[ 625], 00:33:25.569 | 99.99th=[ 625] 00:33:25.569 bw ( KiB/s): min= 5888, max=16640, per=4.07%, avg=9318.40, stdev=3051.07, samples=10 00:33:25.569 iops : min= 46, max= 130, avg=72.80, stdev=23.84, samples=10 00:33:25.569 write: IOPS=59, BW=7593KiB/s (7775kB/s)(41.2MiB/5563msec); 0 zone resets 00:33:25.569 slat (usec): min=11, max=326, avg=39.89, stdev=27.46 00:33:25.569 clat (msec): min=274, max=1549, avg=976.57, stdev=191.04 00:33:25.569 lat (msec): min=274, max=1549, avg=976.61, stdev=191.04 00:33:25.569 clat percentiles (msec): 00:33:25.569 | 1.00th=[ 347], 5.00th=[ 609], 10.00th=[ 735], 20.00th=[ 877], 00:33:25.569 | 30.00th=[ 969], 40.00th=[ 995], 50.00th=[ 1003], 60.00th=[ 1020], 00:33:25.569 | 70.00th=[ 1028], 80.00th=[ 1045], 90.00th=[ 1062], 95.00th=[ 1368], 00:33:25.569 | 99.00th=[ 1536], 99.50th=[ 1552], 99.90th=[ 1552], 99.95th=[ 1552], 00:33:25.569 | 99.99th=[ 1552] 00:33:25.569 bw ( KiB/s): min= 2048, max= 7680, per=3.04%, avg=6886.40, stdev=1712.84, samples=10 00:33:25.569 iops : min= 16, max= 60, avg=53.80, stdev=13.38, samples=10 00:33:25.569 lat (msec) : 50=0.43%, 100=42.53%, 250=8.33%, 500=2.16%, 750=4.17% 00:33:25.569 lat (msec) : 1000=17.10%, 2000=25.29% 00:33:25.569 cpu : usr=0.16%, sys=0.41%, ctx=395, majf=0, minf=1 00:33:25.569 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.6%, 
>=64=90.9% 00:33:25.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.569 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.569 issued rwts: total=366,330,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.569 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.569 job20: (groupid=0, jobs=1): err= 0: pid=81838: Mon Jul 22 17:11:26 2024 00:33:25.569 read: IOPS=56, BW=7175KiB/s (7347kB/s)(39.1MiB/5584msec) 00:33:25.569 slat (usec): min=9, max=153, avg=30.48, stdev=16.64 00:33:25.569 clat (msec): min=23, max=612, avg=83.58, stdev=55.20 00:33:25.569 lat (msec): min=23, max=612, avg=83.61, stdev=55.20 00:33:25.569 clat percentiles (msec): 00:33:25.569 | 1.00th=[ 52], 5.00th=[ 62], 10.00th=[ 65], 20.00th=[ 66], 00:33:25.569 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 70], 00:33:25.569 | 70.00th=[ 71], 80.00th=[ 75], 90.00th=[ 124], 95.00th=[ 182], 00:33:25.569 | 99.00th=[ 275], 99.50th=[ 592], 99.90th=[ 609], 99.95th=[ 609], 00:33:25.569 | 99.99th=[ 609] 00:33:25.569 bw ( KiB/s): min= 256, max=12032, per=3.16%, avg=7236.09, stdev=3034.48, samples=11 00:33:25.569 iops : min= 2, max= 94, avg=56.45, stdev=23.65, samples=11 00:33:25.569 write: IOPS=59, BW=7564KiB/s (7746kB/s)(41.2MiB/5584msec); 0 zone resets 00:33:25.569 slat (usec): min=12, max=118, avg=34.48, stdev=14.34 00:33:25.569 clat (msec): min=276, max=1616, avg=1001.69, stdev=191.53 00:33:25.569 lat (msec): min=276, max=1616, avg=1001.72, stdev=191.53 00:33:25.569 clat percentiles (msec): 00:33:25.569 | 1.00th=[ 351], 5.00th=[ 617], 10.00th=[ 743], 20.00th=[ 978], 00:33:25.569 | 30.00th=[ 1003], 40.00th=[ 1011], 50.00th=[ 1020], 60.00th=[ 1028], 00:33:25.569 | 70.00th=[ 1045], 80.00th=[ 1062], 90.00th=[ 1099], 95.00th=[ 1368], 00:33:25.569 | 99.00th=[ 1569], 99.50th=[ 1586], 99.90th=[ 1620], 99.95th=[ 1620], 00:33:25.569 | 99.99th=[ 1620] 00:33:25.569 bw ( KiB/s): min= 1792, max= 7920, per=3.04%, avg=6884.80, stdev=1802.92, 
samples=10 00:33:25.569 iops : min= 14, max= 61, avg=53.70, stdev=14.03, samples=10 00:33:25.569 lat (msec) : 50=0.31%, 100=42.15%, 250=5.44%, 500=1.56%, 750=4.35% 00:33:25.569 lat (msec) : 1000=10.26%, 2000=35.93% 00:33:25.569 cpu : usr=0.16%, sys=0.34%, ctx=371, majf=0, minf=1 00:33:25.569 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2% 00:33:25.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.569 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.569 issued rwts: total=313,330,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.569 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.569 job21: (groupid=0, jobs=1): err= 0: pid=81839: Mon Jul 22 17:11:26 2024 00:33:25.569 read: IOPS=55, BW=7132KiB/s (7303kB/s)(38.8MiB/5564msec) 00:33:25.569 slat (usec): min=9, max=389, avg=31.42, stdev=24.43 00:33:25.569 clat (msec): min=49, max=620, avg=87.07, stdev=58.62 00:33:25.569 lat (msec): min=49, max=620, avg=87.11, stdev=58.62 00:33:25.569 clat percentiles (msec): 00:33:25.569 | 1.00th=[ 53], 5.00th=[ 64], 10.00th=[ 65], 20.00th=[ 66], 00:33:25.569 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 70], 00:33:25.569 | 70.00th=[ 71], 80.00th=[ 75], 90.00th=[ 142], 95.00th=[ 220], 00:33:25.569 | 99.00th=[ 253], 99.50th=[ 567], 99.90th=[ 617], 99.95th=[ 617], 00:33:25.569 | 99.99th=[ 617] 00:33:25.569 bw ( KiB/s): min= 256, max=11776, per=3.13%, avg=7168.00, stdev=3063.45, samples=11 00:33:25.569 iops : min= 2, max= 92, avg=56.00, stdev=23.93, samples=11 00:33:25.569 write: IOPS=59, BW=7615KiB/s (7797kB/s)(41.4MiB/5564msec); 0 zone resets 00:33:25.569 slat (usec): min=11, max=378, avg=35.54, stdev=23.63 00:33:25.569 clat (msec): min=268, max=1574, avg=992.31, stdev=189.77 00:33:25.569 lat (msec): min=268, max=1574, avg=992.34, stdev=189.78 00:33:25.569 clat percentiles (msec): 00:33:25.569 | 1.00th=[ 355], 5.00th=[ 600], 10.00th=[ 735], 20.00th=[ 953], 00:33:25.569 | 
30.00th=[ 986], 40.00th=[ 1003], 50.00th=[ 1020], 60.00th=[ 1036], 00:33:25.569 | 70.00th=[ 1045], 80.00th=[ 1062], 90.00th=[ 1083], 95.00th=[ 1318], 00:33:25.569 | 99.00th=[ 1552], 99.50th=[ 1569], 99.90th=[ 1569], 99.95th=[ 1569], 00:33:25.569 | 99.99th=[ 1569] 00:33:25.569 bw ( KiB/s): min= 2048, max= 7680, per=3.06%, avg=6912.00, stdev=1719.42, samples=10 00:33:25.569 iops : min= 16, max= 60, avg=54.00, stdev=13.43, samples=10 00:33:25.569 lat (msec) : 50=0.31%, 100=40.41%, 250=6.71%, 500=1.87%, 750=4.52% 00:33:25.569 lat (msec) : 1000=14.82%, 2000=31.36% 00:33:25.569 cpu : usr=0.07%, sys=0.40%, ctx=376, majf=0, minf=1 00:33:25.569 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2% 00:33:25.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.569 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.569 issued rwts: total=310,331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.569 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.569 job22: (groupid=0, jobs=1): err= 0: pid=81840: Mon Jul 22 17:11:26 2024 00:33:25.569 read: IOPS=65, BW=8336KiB/s (8536kB/s)(45.5MiB/5589msec) 00:33:25.569 slat (usec): min=10, max=190, avg=31.40, stdev=19.33 00:33:25.569 clat (msec): min=9, max=642, avg=80.46, stdev=57.32 00:33:25.569 lat (msec): min=9, max=642, avg=80.49, stdev=57.32 00:33:25.569 clat percentiles (msec): 00:33:25.569 | 1.00th=[ 18], 5.00th=[ 52], 10.00th=[ 59], 20.00th=[ 65], 00:33:25.569 | 30.00th=[ 67], 40.00th=[ 67], 50.00th=[ 69], 60.00th=[ 70], 00:33:25.569 | 70.00th=[ 71], 80.00th=[ 73], 90.00th=[ 102], 95.00th=[ 153], 00:33:25.569 | 99.00th=[ 300], 99.50th=[ 625], 99.90th=[ 642], 99.95th=[ 642], 00:33:25.569 | 99.99th=[ 642] 00:33:25.569 bw ( KiB/s): min= 7424, max=13312, per=4.05%, avg=9265.40, stdev=1938.24, samples=10 00:33:25.569 iops : min= 58, max= 104, avg=72.30, stdev=15.17, samples=10 00:33:25.569 write: IOPS=59, BW=7581KiB/s (7763kB/s)(41.4MiB/5589msec); 
0 zone resets 00:33:25.569 slat (usec): min=15, max=419, avg=39.86, stdev=32.28 00:33:25.569 clat (msec): min=204, max=1560, avg=989.99, stdev=188.82 00:33:25.569 lat (msec): min=204, max=1560, avg=990.03, stdev=188.83 00:33:25.569 clat percentiles (msec): 00:33:25.569 | 1.00th=[ 321], 5.00th=[ 625], 10.00th=[ 726], 20.00th=[ 969], 00:33:25.569 | 30.00th=[ 986], 40.00th=[ 995], 50.00th=[ 1011], 60.00th=[ 1020], 00:33:25.569 | 70.00th=[ 1028], 80.00th=[ 1062], 90.00th=[ 1099], 95.00th=[ 1318], 00:33:25.569 | 99.00th=[ 1502], 99.50th=[ 1569], 99.90th=[ 1569], 99.95th=[ 1569], 00:33:25.569 | 99.99th=[ 1569] 00:33:25.569 bw ( KiB/s): min= 2048, max= 7680, per=3.06%, avg=6910.40, stdev=1722.86, samples=10 00:33:25.569 iops : min= 16, max= 60, avg=53.90, stdev=13.42, samples=10 00:33:25.569 lat (msec) : 10=0.29%, 20=0.43%, 50=1.58%, 100=44.75%, 250=4.32% 00:33:25.569 lat (msec) : 500=1.87%, 750=4.46%, 1000=15.40%, 2000=26.91% 00:33:25.569 cpu : usr=0.14%, sys=0.43%, ctx=422, majf=0, minf=1 00:33:25.569 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.6%, >=64=90.9% 00:33:25.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.569 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.569 issued rwts: total=364,331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.569 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.569 job23: (groupid=0, jobs=1): err= 0: pid=81841: Mon Jul 22 17:11:26 2024 00:33:25.569 read: IOPS=53, BW=6878KiB/s (7043kB/s)(37.5MiB/5583msec) 00:33:25.569 slat (usec): min=10, max=222, avg=31.00, stdev=18.31 00:33:25.570 clat (msec): min=19, max=643, avg=87.89, stdev=70.10 00:33:25.570 lat (msec): min=19, max=643, avg=87.92, stdev=70.10 00:33:25.570 clat percentiles (msec): 00:33:25.570 | 1.00th=[ 28], 5.00th=[ 53], 10.00th=[ 64], 20.00th=[ 66], 00:33:25.570 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 70], 00:33:25.570 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 
131], 95.00th=[ 213], 00:33:25.570 | 99.00th=[ 300], 99.50th=[ 625], 99.90th=[ 642], 99.95th=[ 642], 00:33:25.570 | 99.99th=[ 642] 00:33:25.570 bw ( KiB/s): min= 256, max=15584, per=3.02%, avg=6907.45, stdev=3987.68, samples=11 00:33:25.570 iops : min= 2, max= 121, avg=53.82, stdev=30.95, samples=11 00:33:25.570 write: IOPS=58, BW=7543KiB/s (7724kB/s)(41.1MiB/5583msec); 0 zone resets 00:33:25.570 slat (usec): min=16, max=276, avg=36.74, stdev=22.62 00:33:25.570 clat (msec): min=264, max=1611, avg=1003.84, stdev=190.47 00:33:25.570 lat (msec): min=264, max=1611, avg=1003.88, stdev=190.47 00:33:25.570 clat percentiles (msec): 00:33:25.570 | 1.00th=[ 321], 5.00th=[ 625], 10.00th=[ 802], 20.00th=[ 978], 00:33:25.570 | 30.00th=[ 995], 40.00th=[ 1003], 50.00th=[ 1020], 60.00th=[ 1028], 00:33:25.570 | 70.00th=[ 1045], 80.00th=[ 1062], 90.00th=[ 1083], 95.00th=[ 1368], 00:33:25.570 | 99.00th=[ 1586], 99.50th=[ 1603], 99.90th=[ 1620], 99.95th=[ 1620], 00:33:25.570 | 99.99th=[ 1620] 00:33:25.570 bw ( KiB/s): min= 2043, max= 7680, per=3.04%, avg=6884.40, stdev=1718.39, samples=10 00:33:25.570 iops : min= 15, max= 60, avg=53.60, stdev=13.71, samples=10 00:33:25.570 lat (msec) : 20=0.32%, 50=1.59%, 100=38.47%, 250=5.25%, 500=2.86% 00:33:25.570 lat (msec) : 750=3.66%, 1000=15.10%, 2000=32.75% 00:33:25.570 cpu : usr=0.11%, sys=0.39%, ctx=395, majf=0, minf=1 00:33:25.570 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.1%, >=64=90.0% 00:33:25.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.570 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.570 issued rwts: total=300,329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.570 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.570 job24: (groupid=0, jobs=1): err= 0: pid=81842: Mon Jul 22 17:11:26 2024 00:33:25.570 read: IOPS=63, BW=8092KiB/s (8286kB/s)(43.9MiB/5552msec) 00:33:25.570 slat (usec): min=10, max=3721, avg=42.15, stdev=197.78 
00:33:25.570 clat (msec): min=23, max=616, avg=90.61, stdev=77.87 00:33:25.570 lat (msec): min=23, max=616, avg=90.65, stdev=77.87 00:33:25.570 clat percentiles (msec): 00:33:25.570 | 1.00th=[ 41], 5.00th=[ 54], 10.00th=[ 64], 20.00th=[ 66], 00:33:25.570 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 70], 00:33:25.570 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 150], 95.00th=[ 230], 00:33:25.570 | 99.00th=[ 584], 99.50th=[ 600], 99.90th=[ 617], 99.95th=[ 617], 00:33:25.570 | 99.99th=[ 617] 00:33:25.570 bw ( KiB/s): min= 256, max=12800, per=3.51%, avg=8030.27, stdev=3124.88, samples=11 00:33:25.570 iops : min= 2, max= 100, avg=62.73, stdev=24.42, samples=11 00:33:25.570 write: IOPS=58, BW=7516KiB/s (7696kB/s)(40.8MiB/5552msec); 0 zone resets 00:33:25.570 slat (usec): min=13, max=186, avg=37.59, stdev=20.60 00:33:25.570 clat (msec): min=268, max=1551, avg=989.71, stdev=179.93 00:33:25.570 lat (msec): min=268, max=1551, avg=989.75, stdev=179.93 00:33:25.570 clat percentiles (msec): 00:33:25.570 | 1.00th=[ 338], 5.00th=[ 634], 10.00th=[ 785], 20.00th=[ 961], 00:33:25.570 | 30.00th=[ 986], 40.00th=[ 995], 50.00th=[ 1011], 60.00th=[ 1020], 00:33:25.570 | 70.00th=[ 1028], 80.00th=[ 1045], 90.00th=[ 1083], 95.00th=[ 1301], 00:33:25.570 | 99.00th=[ 1519], 99.50th=[ 1536], 99.90th=[ 1552], 99.95th=[ 1552], 00:33:25.570 | 99.99th=[ 1552] 00:33:25.570 bw ( KiB/s): min= 2048, max= 7936, per=3.05%, avg=6887.80, stdev=1721.58, samples=10 00:33:25.570 iops : min= 16, max= 62, avg=53.80, stdev=13.45, samples=10 00:33:25.570 lat (msec) : 50=0.74%, 100=43.13%, 250=6.35%, 500=1.92%, 750=4.28% 00:33:25.570 lat (msec) : 1000=15.66%, 2000=27.92% 00:33:25.570 cpu : usr=0.11%, sys=0.41%, ctx=397, majf=0, minf=1 00:33:25.570 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7% 00:33:25.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.570 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.570 
issued rwts: total=351,326,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.570 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.570 job25: (groupid=0, jobs=1): err= 0: pid=81843: Mon Jul 22 17:11:26 2024 00:33:25.570 read: IOPS=62, BW=7964KiB/s (8155kB/s)(43.2MiB/5561msec) 00:33:25.570 slat (usec): min=10, max=230, avg=31.85, stdev=29.29 00:33:25.570 clat (msec): min=27, max=605, avg=91.87, stdev=79.67 00:33:25.570 lat (msec): min=27, max=605, avg=91.90, stdev=79.67 00:33:25.570 clat percentiles (msec): 00:33:25.570 | 1.00th=[ 44], 5.00th=[ 59], 10.00th=[ 65], 20.00th=[ 66], 00:33:25.570 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 70], 00:33:25.570 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 155], 95.00th=[ 215], 00:33:25.570 | 99.00th=[ 567], 99.50th=[ 609], 99.90th=[ 609], 99.95th=[ 609], 00:33:25.570 | 99.99th=[ 609] 00:33:25.570 bw ( KiB/s): min= 6144, max=14364, per=3.81%, avg=8705.20, stdev=2418.42, samples=10 00:33:25.570 iops : min= 48, max= 112, avg=67.90, stdev=18.87, samples=10 00:33:25.570 write: IOPS=58, BW=7527KiB/s (7707kB/s)(40.9MiB/5561msec); 0 zone resets 00:33:25.570 slat (usec): min=13, max=1264, avg=41.38, stdev=72.74 00:33:25.570 clat (msec): min=272, max=1541, avg=988.95, stdev=175.79 00:33:25.570 lat (msec): min=273, max=1541, avg=988.99, stdev=175.78 00:33:25.570 clat percentiles (msec): 00:33:25.570 | 1.00th=[ 351], 5.00th=[ 625], 10.00th=[ 776], 20.00th=[ 961], 00:33:25.570 | 30.00th=[ 986], 40.00th=[ 1003], 50.00th=[ 1011], 60.00th=[ 1028], 00:33:25.570 | 70.00th=[ 1036], 80.00th=[ 1045], 90.00th=[ 1062], 95.00th=[ 1284], 00:33:25.570 | 99.00th=[ 1502], 99.50th=[ 1502], 99.90th=[ 1536], 99.95th=[ 1536], 00:33:25.570 | 99.99th=[ 1536] 00:33:25.570 bw ( KiB/s): min= 256, max= 7680, per=2.78%, avg=6282.64, stdev=2577.47, samples=11 00:33:25.570 iops : min= 2, max= 60, avg=49.00, stdev=20.10, samples=11 00:33:25.570 lat (msec) : 50=1.49%, 100=42.35%, 250=5.65%, 500=2.23%, 750=4.16% 00:33:25.570 lat (msec) : 
1000=13.97%, 2000=30.16% 00:33:25.570 cpu : usr=0.07%, sys=0.38%, ctx=547, majf=0, minf=1 00:33:25.570 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.6% 00:33:25.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.570 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.570 issued rwts: total=346,327,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.570 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.570 job26: (groupid=0, jobs=1): err= 0: pid=81844: Mon Jul 22 17:11:26 2024 00:33:25.570 read: IOPS=54, BW=6923KiB/s (7089kB/s)(37.6MiB/5565msec) 00:33:25.570 slat (usec): min=10, max=173, avg=25.75, stdev=14.01 00:33:25.570 clat (msec): min=50, max=584, avg=90.91, stdev=61.81 00:33:25.570 lat (msec): min=50, max=584, avg=90.93, stdev=61.81 00:33:25.570 clat percentiles (msec): 00:33:25.570 | 1.00th=[ 51], 5.00th=[ 60], 10.00th=[ 65], 20.00th=[ 67], 00:33:25.570 | 30.00th=[ 68], 40.00th=[ 69], 50.00th=[ 70], 60.00th=[ 71], 00:33:25.570 | 70.00th=[ 73], 80.00th=[ 90], 90.00th=[ 159], 95.00th=[ 230], 00:33:25.570 | 99.00th=[ 262], 99.50th=[ 567], 99.90th=[ 584], 99.95th=[ 584], 00:33:25.570 | 99.99th=[ 584] 00:33:25.570 bw ( KiB/s): min= 5120, max=14592, per=3.35%, avg=7654.40, stdev=2575.46, samples=10 00:33:25.570 iops : min= 40, max= 114, avg=59.80, stdev=20.12, samples=10 00:33:25.570 write: IOPS=59, BW=7636KiB/s (7820kB/s)(41.5MiB/5565msec); 0 zone resets 00:33:25.570 slat (usec): min=14, max=268, avg=32.54, stdev=18.44 00:33:25.570 clat (msec): min=267, max=1539, avg=988.29, stdev=186.35 00:33:25.570 lat (msec): min=267, max=1539, avg=988.32, stdev=186.35 00:33:25.570 clat percentiles (msec): 00:33:25.570 | 1.00th=[ 355], 5.00th=[ 642], 10.00th=[ 743], 20.00th=[ 919], 00:33:25.570 | 30.00th=[ 969], 40.00th=[ 1003], 50.00th=[ 1020], 60.00th=[ 1036], 00:33:25.570 | 70.00th=[ 1053], 80.00th=[ 1062], 90.00th=[ 1083], 95.00th=[ 1301], 00:33:25.570 | 99.00th=[ 1502], 
99.50th=[ 1536], 99.90th=[ 1536], 99.95th=[ 1536], 00:33:25.570 | 99.99th=[ 1536] 00:33:25.570 bw ( KiB/s): min= 256, max= 7680, per=2.79%, avg=6306.91, stdev=2586.16, samples=11 00:33:25.570 iops : min= 2, max= 60, avg=49.27, stdev=20.20, samples=11 00:33:25.570 lat (msec) : 100=38.86%, 250=7.74%, 500=1.90%, 750=4.74%, 1000=14.69% 00:33:25.570 lat (msec) : 2000=32.07% 00:33:25.570 cpu : usr=0.09%, sys=0.31%, ctx=426, majf=0, minf=1 00:33:25.570 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.1%, >=64=90.0% 00:33:25.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.570 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.570 issued rwts: total=301,332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.570 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.570 job27: (groupid=0, jobs=1): err= 0: pid=81845: Mon Jul 22 17:11:26 2024 00:33:25.570 read: IOPS=60, BW=7689KiB/s (7873kB/s)(41.9MiB/5577msec) 00:33:25.570 slat (nsec): min=9193, max=79937, avg=25002.66, stdev=10684.06 00:33:25.570 clat (msec): min=17, max=619, avg=91.85, stdev=72.19 00:33:25.570 lat (msec): min=17, max=619, avg=91.87, stdev=72.19 00:33:25.570 clat percentiles (msec): 00:33:25.570 | 1.00th=[ 28], 5.00th=[ 59], 10.00th=[ 65], 20.00th=[ 66], 00:33:25.570 | 30.00th=[ 67], 40.00th=[ 69], 50.00th=[ 70], 60.00th=[ 71], 00:33:25.570 | 70.00th=[ 73], 80.00th=[ 87], 90.00th=[ 169], 95.00th=[ 192], 00:33:25.570 | 99.00th=[ 584], 99.50th=[ 600], 99.90th=[ 617], 99.95th=[ 617], 00:33:25.570 | 99.99th=[ 617] 00:33:25.570 bw ( KiB/s): min= 256, max=16896, per=3.37%, avg=7701.82, stdev=4091.94, samples=11 00:33:25.570 iops : min= 2, max= 132, avg=60.09, stdev=31.97, samples=11 00:33:25.570 write: IOPS=58, BW=7528KiB/s (7709kB/s)(41.0MiB/5577msec); 0 zone resets 00:33:25.570 slat (usec): min=15, max=106, avg=32.46, stdev=13.55 00:33:25.570 clat (msec): min=264, max=1605, avg=992.27, stdev=184.05 00:33:25.570 lat (msec): min=264, 
max=1605, avg=992.30, stdev=184.05 00:33:25.570 clat percentiles (msec): 00:33:25.570 | 1.00th=[ 342], 5.00th=[ 634], 10.00th=[ 776], 20.00th=[ 936], 00:33:25.570 | 30.00th=[ 986], 40.00th=[ 1003], 50.00th=[ 1011], 60.00th=[ 1028], 00:33:25.570 | 70.00th=[ 1036], 80.00th=[ 1053], 90.00th=[ 1083], 95.00th=[ 1318], 00:33:25.570 | 99.00th=[ 1586], 99.50th=[ 1603], 99.90th=[ 1603], 99.95th=[ 1603], 00:33:25.570 | 99.99th=[ 1603] 00:33:25.570 bw ( KiB/s): min= 2048, max= 7680, per=3.04%, avg=6884.90, stdev=1716.82, samples=10 00:33:25.570 iops : min= 16, max= 60, avg=53.70, stdev=13.40, samples=10 00:33:25.570 lat (msec) : 20=0.30%, 50=1.66%, 100=39.82%, 250=7.09%, 500=2.26% 00:33:25.570 lat (msec) : 750=3.92%, 1000=13.27%, 2000=31.67% 00:33:25.571 cpu : usr=0.09%, sys=0.34%, ctx=417, majf=0, minf=1 00:33:25.571 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:33:25.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.571 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.571 issued rwts: total=335,328,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.571 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.571 job28: (groupid=0, jobs=1): err= 0: pid=81846: Mon Jul 22 17:11:26 2024 00:33:25.571 read: IOPS=68, BW=8772KiB/s (8982kB/s)(47.9MiB/5589msec) 00:33:25.571 slat (usec): min=8, max=239, avg=30.47, stdev=20.51 00:33:25.571 clat (msec): min=7, max=650, avg=87.15, stdev=75.41 00:33:25.571 lat (msec): min=7, max=650, avg=87.18, stdev=75.41 00:33:25.571 clat percentiles (msec): 00:33:25.571 | 1.00th=[ 9], 5.00th=[ 26], 10.00th=[ 58], 20.00th=[ 65], 00:33:25.571 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 70], 00:33:25.571 | 70.00th=[ 71], 80.00th=[ 73], 90.00th=[ 136], 95.00th=[ 266], 00:33:25.571 | 99.00th=[ 317], 99.50th=[ 634], 99.90th=[ 651], 99.95th=[ 651], 00:33:25.571 | 99.99th=[ 651] 00:33:25.571 bw ( KiB/s): min= 255, max=17152, per=3.87%, avg=8843.55, 
stdev=4177.58, samples=11 00:33:25.571 iops : min= 1, max= 134, avg=69.00, stdev=32.84, samples=11 00:33:25.571 write: IOPS=59, BW=7558KiB/s (7739kB/s)(41.2MiB/5589msec); 0 zone resets 00:33:25.571 slat (usec): min=10, max=1053, avg=38.91, stdev=58.20 00:33:25.571 clat (msec): min=84, max=1585, avg=980.60, stdev=193.87 00:33:25.571 lat (msec): min=84, max=1585, avg=980.64, stdev=193.86 00:33:25.571 clat percentiles (msec): 00:33:25.571 | 1.00th=[ 326], 5.00th=[ 634], 10.00th=[ 735], 20.00th=[ 927], 00:33:25.571 | 30.00th=[ 969], 40.00th=[ 986], 50.00th=[ 1003], 60.00th=[ 1020], 00:33:25.571 | 70.00th=[ 1028], 80.00th=[ 1045], 90.00th=[ 1070], 95.00th=[ 1368], 00:33:25.571 | 99.00th=[ 1536], 99.50th=[ 1586], 99.90th=[ 1586], 99.95th=[ 1586], 00:33:25.571 | 99.99th=[ 1586] 00:33:25.571 bw ( KiB/s): min= 2304, max= 7680, per=3.06%, avg=6912.00, stdev=1636.98, samples=10 00:33:25.571 iops : min= 18, max= 60, avg=54.00, stdev=12.79, samples=10 00:33:25.571 lat (msec) : 10=0.98%, 20=1.40%, 50=0.98%, 100=43.62%, 250=3.23% 00:33:25.571 lat (msec) : 500=4.35%, 750=3.93%, 1000=16.55%, 2000=24.96% 00:33:25.571 cpu : usr=0.18%, sys=0.39%, ctx=407, majf=0, minf=1 00:33:25.571 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:33:25.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.571 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.571 issued rwts: total=383,330,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.571 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.571 job29: (groupid=0, jobs=1): err= 0: pid=81847: Mon Jul 22 17:11:26 2024 00:33:25.571 read: IOPS=66, BW=8483KiB/s (8686kB/s)(46.0MiB/5553msec) 00:33:25.571 slat (usec): min=9, max=139, avg=32.01, stdev=16.54 00:33:25.571 clat (msec): min=48, max=615, avg=97.15, stdev=75.47 00:33:25.571 lat (msec): min=48, max=615, avg=97.18, stdev=75.47 00:33:25.571 clat percentiles (msec): 00:33:25.571 | 1.00th=[ 52], 5.00th=[ 
64], 10.00th=[ 65], 20.00th=[ 66], 00:33:25.571 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 71], 00:33:25.571 | 70.00th=[ 74], 80.00th=[ 114], 90.00th=[ 176], 95.00th=[ 230], 00:33:25.571 | 99.00th=[ 575], 99.50th=[ 600], 99.90th=[ 617], 99.95th=[ 617], 00:33:25.571 | 99.99th=[ 617] 00:33:25.571 bw ( KiB/s): min= 256, max=17920, per=3.69%, avg=8448.00, stdev=4411.82, samples=11 00:33:25.571 iops : min= 2, max= 140, avg=66.00, stdev=34.47, samples=11 00:33:25.571 write: IOPS=58, BW=7538KiB/s (7718kB/s)(40.9MiB/5553msec); 0 zone resets 00:33:25.571 slat (nsec): min=12165, max=91947, avg=36318.91, stdev=13919.50 00:33:25.571 clat (msec): min=267, max=1620, avg=975.44, stdev=189.97 00:33:25.571 lat (msec): min=267, max=1620, avg=975.47, stdev=189.97 00:33:25.571 clat percentiles (msec): 00:33:25.571 | 1.00th=[ 363], 5.00th=[ 634], 10.00th=[ 751], 20.00th=[ 860], 00:33:25.571 | 30.00th=[ 969], 40.00th=[ 986], 50.00th=[ 1003], 60.00th=[ 1020], 00:33:25.571 | 70.00th=[ 1028], 80.00th=[ 1053], 90.00th=[ 1083], 95.00th=[ 1318], 00:33:25.571 | 99.00th=[ 1536], 99.50th=[ 1586], 99.90th=[ 1620], 99.95th=[ 1620], 00:33:25.571 | 99.99th=[ 1620] 00:33:25.571 bw ( KiB/s): min= 1792, max= 7936, per=3.04%, avg=6886.40, stdev=1807.98, samples=10 00:33:25.571 iops : min= 14, max= 62, avg=53.80, stdev=14.12, samples=10 00:33:25.571 lat (msec) : 50=0.29%, 100=41.15%, 250=9.78%, 500=2.01%, 750=4.60% 00:33:25.571 lat (msec) : 1000=17.70%, 2000=24.46% 00:33:25.571 cpu : usr=0.18%, sys=0.43%, ctx=396, majf=0, minf=1 00:33:25.571 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.6%, >=64=90.9% 00:33:25.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.571 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:33:25.571 issued rwts: total=368,327,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.571 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:25.571 00:33:25.571 Run status group 0 (all jobs): 
00:33:25.571 READ: bw=223MiB/s (234MB/s), 6739KiB/s-8772KiB/s (6901kB/s-8982kB/s), io=1251MiB (1312MB), run=5530-5599msec 00:33:25.571 WRITE: bw=221MiB/s (232MB/s), 7516KiB/s-7636KiB/s (7696kB/s-7820kB/s), io=1237MiB (1297MB), run=5530-5599msec 00:33:25.571 00:33:25.571 Disk stats (read/write): 00:33:25.571 sda: ios=350/305, merge=0/0, ticks=24574/296218, in_queue=320792, util=89.58% 00:33:25.571 sdb: ios=346/305, merge=0/0, ticks=24568/297298, in_queue=321867, util=91.71% 00:33:25.571 sdc: ios=381/304, merge=0/0, ticks=28410/292664, in_queue=321075, util=91.36% 00:33:25.571 sdd: ios=359/304, merge=0/0, ticks=27189/293417, in_queue=320607, util=91.45% 00:33:25.571 sde: ios=378/304, merge=0/0, ticks=29684/291140, in_queue=320825, util=91.50% 00:33:25.571 sdf: ios=399/308, merge=0/0, ticks=29400/293401, in_queue=322801, util=92.21% 00:33:25.571 sdg: ios=374/304, merge=0/0, ticks=27097/293390, in_queue=320488, util=91.77% 00:33:25.571 sdh: ios=345/304, merge=0/0, ticks=27412/293064, in_queue=320477, util=92.64% 00:33:25.571 sdi: ios=326/306, merge=0/0, ticks=24203/297820, in_queue=322023, util=93.16% 00:33:25.571 sdj: ios=389/304, merge=0/0, ticks=32794/287759, in_queue=320553, util=92.83% 00:33:25.571 sdk: ios=378/304, merge=0/0, ticks=30282/290940, in_queue=321223, util=92.54% 00:33:25.571 sdl: ios=332/304, merge=0/0, ticks=28338/292719, in_queue=321057, util=92.99% 00:33:25.571 sdm: ios=395/304, merge=0/0, ticks=33517/286672, in_queue=320189, util=92.66% 00:33:25.571 sdn: ios=356/310, merge=0/0, ticks=26073/297624, in_queue=323697, util=93.88% 00:33:25.571 sdo: ios=320/306, merge=0/0, ticks=27639/294259, in_queue=321899, util=93.79% 00:33:25.571 sdp: ios=321/304, merge=0/0, ticks=28062/292415, in_queue=320477, util=94.06% 00:33:25.571 sdq: ios=372/308, merge=0/0, ticks=34605/288468, in_queue=323073, util=94.76% 00:33:25.571 sdr: ios=294/304, merge=0/0, ticks=26710/292350, in_queue=319060, util=94.40% 00:33:25.571 sds: ios=324/304, merge=0/0, ticks=26745/294111, 
in_queue=320856, util=94.87% 00:33:25.571 sdt: ios=366/304, merge=0/0, ticks=31930/289216, in_queue=321146, util=94.98% 00:33:25.571 sdu: ios=313/305, merge=0/0, ticks=25083/297541, in_queue=322625, util=95.75% 00:33:25.571 sdv: ios=310/304, merge=0/0, ticks=25916/295028, in_queue=320945, util=95.63% 00:33:25.571 sdw: ios=364/305, merge=0/0, ticks=28115/294301, in_queue=322416, util=96.04% 00:33:25.571 sdx: ios=300/305, merge=0/0, ticks=24686/296943, in_queue=321630, util=96.41% 00:33:25.571 sdy: ios=351/304, merge=0/0, ticks=28686/292580, in_queue=321267, util=96.07% 00:33:25.571 sdz: ios=346/304, merge=0/0, ticks=28630/292798, in_queue=321428, util=96.34% 00:33:25.571 sdaa: ios=301/304, merge=0/0, ticks=26298/294108, in_queue=320407, util=96.46% 00:33:25.571 sdab: ios=335/305, merge=0/0, ticks=28575/293454, in_queue=322030, util=96.78% 00:33:25.571 sdac: ios=383/306, merge=0/0, ticks=31667/290672, in_queue=322340, util=97.52% 00:33:25.571 sdad: ios=368/303, merge=0/0, ticks=33143/287207, in_queue=320350, util=97.38% 00:33:25.571 [2024-07-22 17:11:26.700716] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:25.571 [2024-07-22 17:11:26.706052] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:25.571 [2024-07-22 17:11:26.709018] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:25.571 [2024-07-22 17:11:26.712041] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:25.571 [2024-07-22 17:11:26.715292] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:25.571 [2024-07-22 17:11:26.718389] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:25.571 [2024-07-22 17:11:26.721796] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:25.571 [2024-07-22 17:11:26.724056] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD 
page 0xb9 00:33:25.571 [2024-07-22 17:11:26.726330] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:25.571 17:11:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 262144 -d 16 -t randwrite -r 10 00:33:25.571 [2024-07-22 17:11:26.728746] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:25.571 [global] 00:33:25.571 thread=1 00:33:25.571 invalidate=1 00:33:25.571 rw=randwrite 00:33:25.571 time_based=1 00:33:25.571 runtime=10 00:33:25.571 ioengine=libaio 00:33:25.571 direct=1 00:33:25.571 bs=262144 00:33:25.571 iodepth=16 00:33:25.571 norandommap=1 00:33:25.571 numjobs=1 00:33:25.571 00:33:25.571 [job0] 00:33:25.571 filename=/dev/sda 00:33:25.571 [job1] 00:33:25.571 filename=/dev/sdb 00:33:25.571 [job2] 00:33:25.571 filename=/dev/sdc 00:33:25.571 [job3] 00:33:25.571 filename=/dev/sdd 00:33:25.571 [job4] 00:33:25.571 filename=/dev/sde 00:33:25.571 [job5] 00:33:25.571 filename=/dev/sdf 00:33:25.571 [job6] 00:33:25.571 filename=/dev/sdg 00:33:25.571 [job7] 00:33:25.571 filename=/dev/sdh 00:33:25.571 [job8] 00:33:25.571 filename=/dev/sdi 00:33:25.571 [job9] 00:33:25.571 filename=/dev/sdj 00:33:25.571 [job10] 00:33:25.571 filename=/dev/sdk 00:33:25.571 [job11] 00:33:25.571 filename=/dev/sdl 00:33:25.571 [job12] 00:33:25.571 filename=/dev/sdm 00:33:25.571 [job13] 00:33:25.571 filename=/dev/sdn 00:33:25.571 [job14] 00:33:25.571 filename=/dev/sdo 00:33:25.571 [job15] 00:33:25.571 filename=/dev/sdp 00:33:25.571 [job16] 00:33:25.571 filename=/dev/sdq 00:33:25.571 [job17] 00:33:25.571 filename=/dev/sdr 00:33:25.571 [job18] 00:33:25.571 filename=/dev/sds 00:33:25.571 [job19] 00:33:25.571 filename=/dev/sdt 00:33:25.571 [job20] 00:33:25.572 filename=/dev/sdu 00:33:25.572 [job21] 00:33:25.572 filename=/dev/sdv 00:33:25.572 [job22] 00:33:25.572 filename=/dev/sdw 00:33:25.572 [job23] 00:33:25.572 filename=/dev/sdx 
00:33:25.572 [job24] 00:33:25.572 filename=/dev/sdy 00:33:25.572 [job25] 00:33:25.572 filename=/dev/sdz 00:33:25.572 [job26] 00:33:25.572 filename=/dev/sdaa 00:33:25.572 [job27] 00:33:25.572 filename=/dev/sdab 00:33:25.572 [job28] 00:33:25.572 filename=/dev/sdac 00:33:25.572 [job29] 00:33:25.572 filename=/dev/sdad 00:33:25.830 queue_depth set to 113 (sda) 00:33:25.831 queue_depth set to 113 (sdb) 00:33:25.831 queue_depth set to 113 (sdc) 00:33:25.831 queue_depth set to 113 (sdd) 00:33:25.831 queue_depth set to 113 (sde) 00:33:25.831 queue_depth set to 113 (sdf) 00:33:25.831 queue_depth set to 113 (sdg) 00:33:25.831 queue_depth set to 113 (sdh) 00:33:25.831 queue_depth set to 113 (sdi) 00:33:25.831 queue_depth set to 113 (sdj) 00:33:25.831 queue_depth set to 113 (sdk) 00:33:25.831 queue_depth set to 113 (sdl) 00:33:25.831 queue_depth set to 113 (sdm) 00:33:25.831 queue_depth set to 113 (sdn) 00:33:25.831 queue_depth set to 113 (sdo) 00:33:25.831 queue_depth set to 113 (sdp) 00:33:25.831 queue_depth set to 113 (sdq) 00:33:25.831 queue_depth set to 113 (sdr) 00:33:25.831 queue_depth set to 113 (sds) 00:33:25.831 queue_depth set to 113 (sdt) 00:33:25.831 queue_depth set to 113 (sdu) 00:33:25.831 queue_depth set to 113 (sdv) 00:33:25.831 queue_depth set to 113 (sdw) 00:33:25.831 queue_depth set to 113 (sdx) 00:33:25.831 queue_depth set to 113 (sdy) 00:33:25.831 queue_depth set to 113 (sdz) 00:33:25.831 queue_depth set to 113 (sdaa) 00:33:25.831 queue_depth set to 113 (sdab) 00:33:25.831 queue_depth set to 113 (sdac) 00:33:25.831 queue_depth set to 113 (sdad) 00:33:26.089 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.089 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.089 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.089 job3: (g=0): 
rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.089 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.089 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.089 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.089 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.089 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.089 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.089 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.089 job11: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.089 job12: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.089 job13: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.089 job14: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.089 job15: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.089 job16: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.089 job17: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.089 job18: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.089 job19: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.089 job20: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.090 job21: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.090 job22: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.090 job23: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.090 job24: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.090 job25: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.090 job26: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.090 job27: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.090 job28: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.090 job29: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:33:26.090 fio-3.35 00:33:26.090 Starting 30 threads 00:33:26.090 [2024-07-22 17:11:27.502745] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.506842] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.510507] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.513976] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 
0xb9 00:33:26.090 [2024-07-22 17:11:27.516654] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.519087] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.521551] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.524084] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.526625] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.529000] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.531609] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.534110] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.536619] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.539070] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.541856] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.547695] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.550256] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.552797] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.555159] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.558541] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.560832] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.563138] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.565634] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.568131] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.570611] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.573109] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.575565] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.578076] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.580495] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:26.090 [2024-07-22 17:11:27.582876] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:38.298 [2024-07-22 17:11:38.361778] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:38.298 [2024-07-22 17:11:38.375916] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:38.298 [2024-07-22 17:11:38.380938] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:38.298 [2024-07-22 17:11:38.385040] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:38.298 [2024-07-22 17:11:38.387827] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:38.298 [2024-07-22 17:11:38.390520] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:38.298 [2024-07-22 17:11:38.393389] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: 
unsupported INQUIRY VPD page 0xb9 00:33:38.298 [2024-07-22 17:11:38.397013] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:38.298 [2024-07-22 17:11:38.400866] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:38.298 [2024-07-22 17:11:38.404901] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:38.298 [2024-07-22 17:11:38.407614] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:38.298 [2024-07-22 17:11:38.411753] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:38.298 [2024-07-22 17:11:38.415870] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:38.298 [2024-07-22 17:11:38.419422] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:38.298 [2024-07-22 17:11:38.422294] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:38.298 [2024-07-22 17:11:38.425260] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:38.298 00:33:38.298 job0: (groupid=0, jobs=1): err= 0: pid=82350: Mon Jul 22 17:11:38 2024 00:33:38.298 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10261msec); 0 zone resets 00:33:38.298 slat (usec): min=23, max=212, avg=62.92, stdev=15.73 00:33:38.298 clat (msec): min=27, max=530, avg=297.42, stdev=34.53 00:33:38.298 lat (msec): min=27, max=530, avg=297.48, stdev=34.54 00:33:38.298 clat percentiles (msec): 00:33:38.298 | 1.00th=[ 122], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.298 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.298 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.298 | 99.00th=[ 439], 99.50th=[ 493], 99.90th=[ 531], 99.95th=[ 531], 00:33:38.298 | 99.99th=[ 531] 00:33:38.298 bw ( KiB/s): min=13312, max=14336, per=3.33%, avg=13722.90, stdev=313.45, samples=20 
00:33:38.298 iops : min= 52, max= 56, avg=53.60, stdev= 1.23, samples=20 00:33:38.298 lat (msec) : 50=0.36%, 100=0.36%, 250=1.45%, 500=97.46%, 750=0.36% 00:33:38.298 cpu : usr=0.16%, sys=0.31%, ctx=553, majf=0, minf=1 00:33:38.298 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.298 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.298 issued rwts: total=0,551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.298 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.298 job1: (groupid=0, jobs=1): err= 0: pid=82351: Mon Jul 22 17:11:38 2024 00:33:38.298 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10266msec); 0 zone resets 00:33:38.298 slat (usec): min=19, max=129, avg=56.85, stdev=15.96 00:33:38.298 clat (msec): min=13, max=538, avg=297.01, stdev=37.49 00:33:38.298 lat (msec): min=13, max=538, avg=297.07, stdev=37.49 00:33:38.298 clat percentiles (msec): 00:33:38.298 | 1.00th=[ 101], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.298 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.298 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.298 | 99.00th=[ 447], 99.50th=[ 502], 99.90th=[ 542], 99.95th=[ 542], 00:33:38.298 | 99.99th=[ 542] 00:33:38.298 bw ( KiB/s): min=13285, max=14336, per=3.33%, avg=13747.20, stdev=253.59, samples=20 00:33:38.298 iops : min= 51, max= 56, avg=53.65, stdev= 1.09, samples=20 00:33:38.298 lat (msec) : 20=0.18%, 50=0.36%, 100=0.54%, 250=1.27%, 500=97.10% 00:33:38.298 lat (msec) : 750=0.54% 00:33:38.298 cpu : usr=0.19%, sys=0.24%, ctx=552, majf=0, minf=1 00:33:38.298 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.298 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.298 issued 
rwts: total=0,552,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.298 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.298 job2: (groupid=0, jobs=1): err= 0: pid=82356: Mon Jul 22 17:11:38 2024 00:33:38.298 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10262msec); 0 zone resets 00:33:38.298 slat (usec): min=16, max=182, avg=58.57, stdev=18.52 00:33:38.298 clat (msec): min=28, max=530, avg=297.44, stdev=34.47 00:33:38.298 lat (msec): min=28, max=530, avg=297.50, stdev=34.47 00:33:38.298 clat percentiles (msec): 00:33:38.298 | 1.00th=[ 123], 5.00th=[ 292], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.298 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.298 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.298 | 99.00th=[ 439], 99.50th=[ 493], 99.90th=[ 531], 99.95th=[ 531], 00:33:38.298 | 99.99th=[ 531] 00:33:38.298 bw ( KiB/s): min=13312, max=14336, per=3.33%, avg=13721.60, stdev=315.18, samples=20 00:33:38.298 iops : min= 52, max= 56, avg=53.60, stdev= 1.23, samples=20 00:33:38.298 lat (msec) : 50=0.36%, 100=0.36%, 250=1.45%, 500=97.46%, 750=0.36% 00:33:38.298 cpu : usr=0.17%, sys=0.27%, ctx=554, majf=0, minf=1 00:33:38.298 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.298 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.299 issued rwts: total=0,551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.299 job3: (groupid=0, jobs=1): err= 0: pid=82357: Mon Jul 22 17:11:38 2024 00:33:38.299 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10258msec); 0 zone resets 00:33:38.299 slat (usec): min=21, max=310, avg=70.98, stdev=31.56 00:33:38.299 clat (msec): min=33, max=520, avg=297.34, stdev=33.08 00:33:38.299 lat (msec): min=33, max=520, avg=297.41, stdev=33.08 00:33:38.299 clat percentiles (msec): 
00:33:38.299 | 1.00th=[ 128], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.299 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.299 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.299 | 99.00th=[ 430], 99.50th=[ 485], 99.90th=[ 523], 99.95th=[ 523], 00:33:38.299 | 99.99th=[ 523] 00:33:38.299 bw ( KiB/s): min=12825, max=14336, per=3.32%, avg=13693.00, stdev=360.01, samples=20 00:33:38.299 iops : min= 50, max= 56, avg=53.35, stdev= 1.35, samples=20 00:33:38.299 lat (msec) : 50=0.18%, 100=0.54%, 250=1.45%, 500=97.46%, 750=0.36% 00:33:38.299 cpu : usr=0.19%, sys=0.27%, ctx=571, majf=0, minf=1 00:33:38.299 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.299 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.299 issued rwts: total=0,551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.299 job4: (groupid=0, jobs=1): err= 0: pid=82381: Mon Jul 22 17:11:38 2024 00:33:38.299 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10256msec); 0 zone resets 00:33:38.299 slat (usec): min=26, max=371, avg=66.74, stdev=26.45 00:33:38.299 clat (msec): min=33, max=518, avg=297.27, stdev=32.92 00:33:38.299 lat (msec): min=33, max=518, avg=297.34, stdev=32.93 00:33:38.299 clat percentiles (msec): 00:33:38.299 | 1.00th=[ 128], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.299 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.299 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.299 | 99.00th=[ 426], 99.50th=[ 481], 99.90th=[ 518], 99.95th=[ 518], 00:33:38.299 | 99.99th=[ 518] 00:33:38.299 bw ( KiB/s): min=12825, max=14364, per=3.32%, avg=13694.45, stdev=324.84, samples=20 00:33:38.299 iops : min= 50, max= 56, avg=53.35, stdev= 1.27, samples=20 00:33:38.299 lat 
(msec) : 50=0.18%, 100=0.54%, 250=1.45%, 500=97.46%, 750=0.36% 00:33:38.299 cpu : usr=0.20%, sys=0.21%, ctx=559, majf=0, minf=1 00:33:38.299 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.299 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.299 issued rwts: total=0,551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.299 job5: (groupid=0, jobs=1): err= 0: pid=82385: Mon Jul 22 17:11:38 2024 00:33:38.299 write: IOPS=54, BW=13.5MiB/s (14.2MB/s)(139MiB/10273msec); 0 zone resets 00:33:38.299 slat (usec): min=21, max=428, avg=71.92, stdev=36.14 00:33:38.299 clat (msec): min=3, max=539, avg=295.59, stdev=42.54 00:33:38.299 lat (msec): min=3, max=539, avg=295.66, stdev=42.55 00:33:38.299 clat percentiles (msec): 00:33:38.299 | 1.00th=[ 49], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.299 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.299 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.299 | 99.00th=[ 447], 99.50th=[ 502], 99.90th=[ 542], 99.95th=[ 542], 00:33:38.299 | 99.99th=[ 542] 00:33:38.299 bw ( KiB/s): min=13285, max=15329, per=3.35%, avg=13818.30, stdev=466.07, samples=20 00:33:38.299 iops : min= 51, max= 59, avg=53.80, stdev= 1.77, samples=20 00:33:38.299 lat (msec) : 4=0.18%, 10=0.18%, 20=0.36%, 50=0.36%, 100=0.36% 00:33:38.299 lat (msec) : 250=1.44%, 500=96.58%, 750=0.54% 00:33:38.299 cpu : usr=0.16%, sys=0.28%, ctx=579, majf=0, minf=1 00:33:38.299 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.299 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.299 issued rwts: total=0,555,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.299 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:33:38.299 job6: (groupid=0, jobs=1): err= 0: pid=82391: Mon Jul 22 17:11:38 2024 00:33:38.299 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10261msec); 0 zone resets 00:33:38.299 slat (usec): min=23, max=363, avg=55.59, stdev=25.93 00:33:38.299 clat (msec): min=26, max=532, avg=297.42, stdev=34.84 00:33:38.299 lat (msec): min=26, max=532, avg=297.48, stdev=34.84 00:33:38.299 clat percentiles (msec): 00:33:38.299 | 1.00th=[ 120], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.299 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.299 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.299 | 99.00th=[ 439], 99.50th=[ 498], 99.90th=[ 531], 99.95th=[ 531], 00:33:38.299 | 99.99th=[ 531] 00:33:38.299 bw ( KiB/s): min=13312, max=14336, per=3.33%, avg=13721.60, stdev=356.28, samples=20 00:33:38.299 iops : min= 52, max= 56, avg=53.60, stdev= 1.39, samples=20 00:33:38.299 lat (msec) : 50=0.36%, 100=0.36%, 250=1.45%, 500=97.46%, 750=0.36% 00:33:38.299 cpu : usr=0.09%, sys=0.29%, ctx=591, majf=0, minf=1 00:33:38.299 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.299 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.299 issued rwts: total=0,551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.299 job7: (groupid=0, jobs=1): err= 0: pid=82392: Mon Jul 22 17:11:38 2024 00:33:38.299 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10259msec); 0 zone resets 00:33:38.299 slat (usec): min=19, max=565, avg=64.67, stdev=26.26 00:33:38.299 clat (msec): min=31, max=523, avg=297.35, stdev=33.50 00:33:38.299 lat (msec): min=31, max=523, avg=297.41, stdev=33.50 00:33:38.299 clat percentiles (msec): 00:33:38.299 | 1.00th=[ 126], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 
296], 00:33:38.299 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.299 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.299 | 99.00th=[ 430], 99.50th=[ 489], 99.90th=[ 523], 99.95th=[ 523], 00:33:38.299 | 99.99th=[ 523] 00:33:38.299 bw ( KiB/s): min=12800, max=14336, per=3.33%, avg=13717.45, stdev=393.94, samples=20 00:33:38.299 iops : min= 50, max= 56, avg=53.45, stdev= 1.61, samples=20 00:33:38.299 lat (msec) : 50=0.18%, 100=0.54%, 250=1.45%, 500=97.46%, 750=0.36% 00:33:38.299 cpu : usr=0.24%, sys=0.23%, ctx=553, majf=0, minf=1 00:33:38.299 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.299 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.299 issued rwts: total=0,551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.299 job8: (groupid=0, jobs=1): err= 0: pid=82393: Mon Jul 22 17:11:38 2024 00:33:38.299 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10258msec); 0 zone resets 00:33:38.299 slat (usec): min=29, max=308, avg=66.51, stdev=23.35 00:33:38.299 clat (msec): min=33, max=520, avg=297.32, stdev=33.16 00:33:38.299 lat (msec): min=33, max=520, avg=297.39, stdev=33.16 00:33:38.299 clat percentiles (msec): 00:33:38.299 | 1.00th=[ 128], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.299 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.299 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.299 | 99.00th=[ 430], 99.50th=[ 485], 99.90th=[ 523], 99.95th=[ 523], 00:33:38.299 | 99.99th=[ 523] 00:33:38.299 bw ( KiB/s): min=12825, max=14336, per=3.32%, avg=13693.00, stdev=360.01, samples=20 00:33:38.299 iops : min= 50, max= 56, avg=53.35, stdev= 1.35, samples=20 00:33:38.299 lat (msec) : 50=0.18%, 100=0.54%, 250=1.45%, 500=97.46%, 750=0.36% 
00:33:38.299 cpu : usr=0.19%, sys=0.26%, ctx=566, majf=0, minf=1 00:33:38.299 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.299 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.299 issued rwts: total=0,551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.299 job9: (groupid=0, jobs=1): err= 0: pid=82398: Mon Jul 22 17:11:38 2024 00:33:38.299 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10257msec); 0 zone resets 00:33:38.299 slat (usec): min=20, max=350, avg=84.62, stdev=43.95 00:33:38.299 clat (msec): min=34, max=518, avg=297.30, stdev=32.76 00:33:38.299 lat (msec): min=34, max=518, avg=297.38, stdev=32.76 00:33:38.299 clat percentiles (msec): 00:33:38.299 | 1.00th=[ 130], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.299 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.299 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.299 | 99.00th=[ 426], 99.50th=[ 481], 99.90th=[ 518], 99.95th=[ 518], 00:33:38.299 | 99.99th=[ 518] 00:33:38.299 bw ( KiB/s): min=12800, max=14364, per=3.32%, avg=13693.20, stdev=328.39, samples=20 00:33:38.299 iops : min= 50, max= 56, avg=53.35, stdev= 1.27, samples=20 00:33:38.299 lat (msec) : 50=0.18%, 100=0.54%, 250=1.45%, 500=97.46%, 750=0.36% 00:33:38.299 cpu : usr=0.19%, sys=0.31%, ctx=586, majf=0, minf=1 00:33:38.299 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.299 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.299 issued rwts: total=0,551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.300 job10: (groupid=0, jobs=1): err= 0: pid=82405: Mon 
Jul 22 17:11:38 2024 00:33:38.300 write: IOPS=53, BW=13.5MiB/s (14.1MB/s)(139MiB/10275msec); 0 zone resets 00:33:38.300 slat (usec): min=27, max=432, avg=63.69, stdev=21.62 00:33:38.300 clat (msec): min=7, max=537, avg=296.20, stdev=40.09 00:33:38.300 lat (msec): min=7, max=537, avg=296.26, stdev=40.09 00:33:38.300 clat percentiles (msec): 00:33:38.300 | 1.00th=[ 72], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.300 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.300 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.300 | 99.00th=[ 447], 99.50th=[ 502], 99.90th=[ 542], 99.95th=[ 542], 00:33:38.300 | 99.99th=[ 542] 00:33:38.300 bw ( KiB/s): min=13285, max=14336, per=3.34%, avg=13767.25, stdev=284.62, samples=20 00:33:38.300 iops : min= 51, max= 56, avg=53.60, stdev= 1.23, samples=20 00:33:38.300 lat (msec) : 10=0.18%, 20=0.18%, 50=0.36%, 100=0.54%, 250=1.44% 00:33:38.300 lat (msec) : 500=96.93%, 750=0.36% 00:33:38.300 cpu : usr=0.21%, sys=0.25%, ctx=555, majf=0, minf=1 00:33:38.300 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.300 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.300 issued rwts: total=0,554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.300 job11: (groupid=0, jobs=1): err= 0: pid=82455: Mon Jul 22 17:11:38 2024 00:33:38.300 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10256msec); 0 zone resets 00:33:38.300 slat (usec): min=28, max=201, avg=56.63, stdev=24.59 00:33:38.300 clat (msec): min=34, max=518, avg=297.31, stdev=32.84 00:33:38.300 lat (msec): min=34, max=518, avg=297.36, stdev=32.84 00:33:38.300 clat percentiles (msec): 00:33:38.300 | 1.00th=[ 129], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.300 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 
60.00th=[ 300], 00:33:38.300 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.300 | 99.00th=[ 426], 99.50th=[ 481], 99.90th=[ 518], 99.95th=[ 518], 00:33:38.300 | 99.99th=[ 518] 00:33:38.300 bw ( KiB/s): min=12825, max=14364, per=3.32%, avg=13694.45, stdev=324.84, samples=20 00:33:38.300 iops : min= 50, max= 56, avg=53.35, stdev= 1.27, samples=20 00:33:38.300 lat (msec) : 50=0.18%, 100=0.54%, 250=1.45%, 500=97.46%, 750=0.36% 00:33:38.300 cpu : usr=0.10%, sys=0.27%, ctx=597, majf=0, minf=1 00:33:38.300 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.300 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.300 issued rwts: total=0,551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.300 job12: (groupid=0, jobs=1): err= 0: pid=82477: Mon Jul 22 17:11:38 2024 00:33:38.300 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10269msec); 0 zone resets 00:33:38.300 slat (usec): min=16, max=163, avg=61.11, stdev=16.89 00:33:38.300 clat (msec): min=15, max=532, avg=297.10, stdev=36.04 00:33:38.300 lat (msec): min=15, max=532, avg=297.16, stdev=36.05 00:33:38.300 clat percentiles (msec): 00:33:38.300 | 1.00th=[ 109], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.300 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.300 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.300 | 99.00th=[ 439], 99.50th=[ 498], 99.90th=[ 531], 99.95th=[ 531], 00:33:38.300 | 99.99th=[ 531] 00:33:38.300 bw ( KiB/s): min=13285, max=14336, per=3.33%, avg=13720.25, stdev=317.08, samples=20 00:33:38.300 iops : min= 51, max= 56, avg=53.55, stdev= 1.32, samples=20 00:33:38.300 lat (msec) : 20=0.18%, 50=0.18%, 100=0.54%, 250=1.45%, 500=97.28% 00:33:38.300 lat (msec) : 750=0.36% 00:33:38.300 cpu : usr=0.15%, sys=0.30%, 
ctx=554, majf=0, minf=1 00:33:38.300 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.300 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.300 issued rwts: total=0,552,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.300 job13: (groupid=0, jobs=1): err= 0: pid=82506: Mon Jul 22 17:11:38 2024 00:33:38.300 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10257msec); 0 zone resets 00:33:38.300 slat (usec): min=31, max=5743, avg=72.19, stdev=242.74 00:33:38.300 clat (msec): min=34, max=532, avg=297.66, stdev=33.93 00:33:38.300 lat (msec): min=40, max=532, avg=297.73, stdev=33.85 00:33:38.300 clat percentiles (msec): 00:33:38.300 | 1.00th=[ 129], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.300 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.300 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.300 | 99.00th=[ 439], 99.50th=[ 498], 99.90th=[ 531], 99.95th=[ 531], 00:33:38.300 | 99.99th=[ 531] 00:33:38.300 bw ( KiB/s): min=12825, max=14336, per=3.32%, avg=13693.05, stdev=321.97, samples=20 00:33:38.300 iops : min= 50, max= 56, avg=53.40, stdev= 1.27, samples=20 00:33:38.300 lat (msec) : 50=0.18%, 100=0.55%, 250=1.45%, 500=97.45%, 750=0.36% 00:33:38.300 cpu : usr=0.22%, sys=0.23%, ctx=557, majf=0, minf=1 00:33:38.300 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.300 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.300 issued rwts: total=0,550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.300 job14: (groupid=0, jobs=1): err= 0: pid=82517: Mon Jul 22 17:11:38 2024 00:33:38.300 
write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10261msec); 0 zone resets 00:33:38.300 slat (usec): min=18, max=136, avg=60.24, stdev=13.93 00:33:38.300 clat (msec): min=27, max=530, avg=297.42, stdev=34.53 00:33:38.300 lat (msec): min=27, max=530, avg=297.48, stdev=34.53 00:33:38.300 clat percentiles (msec): 00:33:38.300 | 1.00th=[ 122], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.300 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.300 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.300 | 99.00th=[ 439], 99.50th=[ 493], 99.90th=[ 531], 99.95th=[ 531], 00:33:38.300 | 99.99th=[ 531] 00:33:38.300 bw ( KiB/s): min=13312, max=14336, per=3.33%, avg=13722.90, stdev=313.45, samples=20 00:33:38.300 iops : min= 52, max= 56, avg=53.60, stdev= 1.23, samples=20 00:33:38.300 lat (msec) : 50=0.36%, 100=0.36%, 250=1.45%, 500=97.46%, 750=0.36% 00:33:38.300 cpu : usr=0.20%, sys=0.26%, ctx=554, majf=0, minf=1 00:33:38.300 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.300 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.300 issued rwts: total=0,551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.300 job15: (groupid=0, jobs=1): err= 0: pid=82550: Mon Jul 22 17:11:38 2024 00:33:38.300 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10256msec); 0 zone resets 00:33:38.300 slat (usec): min=25, max=622, avg=60.58, stdev=30.15 00:33:38.300 clat (msec): min=33, max=518, avg=297.28, stdev=32.90 00:33:38.300 lat (msec): min=33, max=518, avg=297.34, stdev=32.90 00:33:38.300 clat percentiles (msec): 00:33:38.300 | 1.00th=[ 128], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.300 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.300 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 
95.00th=[ 313], 00:33:38.300 | 99.00th=[ 426], 99.50th=[ 481], 99.90th=[ 518], 99.95th=[ 518], 00:33:38.300 | 99.99th=[ 518] 00:33:38.300 bw ( KiB/s): min=12825, max=14364, per=3.32%, avg=13694.45, stdev=324.84, samples=20 00:33:38.300 iops : min= 50, max= 56, avg=53.35, stdev= 1.27, samples=20 00:33:38.300 lat (msec) : 50=0.18%, 100=0.54%, 250=1.45%, 500=97.46%, 750=0.36% 00:33:38.300 cpu : usr=0.21%, sys=0.18%, ctx=554, majf=0, minf=1 00:33:38.300 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.300 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.300 issued rwts: total=0,551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.300 job16: (groupid=0, jobs=1): err= 0: pid=82551: Mon Jul 22 17:11:38 2024 00:33:38.300 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10261msec); 0 zone resets 00:33:38.300 slat (usec): min=30, max=134, avg=57.32, stdev=16.17 00:33:38.300 clat (msec): min=30, max=527, avg=297.42, stdev=33.91 00:33:38.300 lat (msec): min=30, max=527, avg=297.48, stdev=33.92 00:33:38.300 clat percentiles (msec): 00:33:38.300 | 1.00th=[ 125], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.300 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.300 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.300 | 99.00th=[ 435], 99.50th=[ 489], 99.90th=[ 527], 99.95th=[ 527], 00:33:38.300 | 99.99th=[ 527] 00:33:38.300 bw ( KiB/s): min=13285, max=14336, per=3.33%, avg=13717.35, stdev=311.37, samples=20 00:33:38.300 iops : min= 51, max= 56, avg=53.50, stdev= 1.24, samples=20 00:33:38.300 lat (msec) : 50=0.36%, 100=0.36%, 250=1.45%, 500=97.46%, 750=0.36% 00:33:38.300 cpu : usr=0.23%, sys=0.20%, ctx=552, majf=0, minf=1 00:33:38.300 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=97.3%, 32=0.0%, >=64=0.0% 
00:33:38.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.300 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.300 issued rwts: total=0,551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.300 job17: (groupid=0, jobs=1): err= 0: pid=82552: Mon Jul 22 17:11:38 2024 00:33:38.300 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10266msec); 0 zone resets 00:33:38.300 slat (usec): min=27, max=614, avg=61.71, stdev=27.63 00:33:38.300 clat (msec): min=12, max=538, avg=296.99, stdev=37.54 00:33:38.300 lat (msec): min=12, max=538, avg=297.06, stdev=37.54 00:33:38.300 clat percentiles (msec): 00:33:38.300 | 1.00th=[ 100], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.300 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.301 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.301 | 99.00th=[ 447], 99.50th=[ 502], 99.90th=[ 542], 99.95th=[ 542], 00:33:38.301 | 99.99th=[ 542] 00:33:38.301 bw ( KiB/s): min=13285, max=14336, per=3.33%, avg=13747.20, stdev=253.59, samples=20 00:33:38.301 iops : min= 51, max= 56, avg=53.65, stdev= 1.09, samples=20 00:33:38.301 lat (msec) : 20=0.18%, 50=0.36%, 100=0.54%, 250=1.27%, 500=97.10% 00:33:38.301 lat (msec) : 750=0.54% 00:33:38.301 cpu : usr=0.18%, sys=0.19%, ctx=563, majf=0, minf=1 00:33:38.301 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.301 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.301 issued rwts: total=0,552,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.301 job18: (groupid=0, jobs=1): err= 0: pid=82553: Mon Jul 22 17:11:38 2024 00:33:38.301 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10255msec); 0 zone resets 
00:33:38.301 slat (usec): min=22, max=139, avg=59.01, stdev=14.16 00:33:38.301 clat (msec): min=35, max=515, avg=297.26, stdev=32.56 00:33:38.301 lat (msec): min=35, max=516, avg=297.32, stdev=32.56 00:33:38.301 clat percentiles (msec): 00:33:38.301 | 1.00th=[ 130], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.301 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.301 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.301 | 99.00th=[ 426], 99.50th=[ 481], 99.90th=[ 518], 99.95th=[ 518], 00:33:38.301 | 99.99th=[ 518] 00:33:38.301 bw ( KiB/s): min=12774, max=14336, per=3.32%, avg=13696.00, stdev=331.49, samples=20 00:33:38.301 iops : min= 49, max= 56, avg=53.40, stdev= 1.43, samples=20 00:33:38.301 lat (msec) : 50=0.18%, 100=0.54%, 250=1.45%, 500=97.64%, 750=0.18% 00:33:38.301 cpu : usr=0.17%, sys=0.28%, ctx=552, majf=0, minf=1 00:33:38.301 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.301 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.301 issued rwts: total=0,551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.301 job19: (groupid=0, jobs=1): err= 0: pid=82555: Mon Jul 22 17:11:38 2024 00:33:38.301 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10257msec); 0 zone resets 00:33:38.301 slat (usec): min=28, max=227, avg=54.66, stdev=21.34 00:33:38.301 clat (msec): min=33, max=533, avg=297.67, stdev=34.18 00:33:38.301 lat (msec): min=33, max=533, avg=297.73, stdev=34.18 00:33:38.301 clat percentiles (msec): 00:33:38.301 | 1.00th=[ 128], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.301 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.301 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.301 | 99.00th=[ 443], 99.50th=[ 498], 99.90th=[ 
535], 99.95th=[ 535], 00:33:38.301 | 99.99th=[ 535] 00:33:38.301 bw ( KiB/s): min=12800, max=14336, per=3.32%, avg=13691.80, stdev=325.55, samples=20 00:33:38.301 iops : min= 50, max= 56, avg=53.40, stdev= 1.27, samples=20 00:33:38.301 lat (msec) : 50=0.18%, 100=0.55%, 250=1.45%, 500=97.45%, 750=0.36% 00:33:38.301 cpu : usr=0.15%, sys=0.21%, ctx=597, majf=0, minf=1 00:33:38.301 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.301 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.301 issued rwts: total=0,550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.301 job20: (groupid=0, jobs=1): err= 0: pid=82556: Mon Jul 22 17:11:38 2024 00:33:38.301 write: IOPS=53, BW=13.5MiB/s (14.1MB/s)(139MiB/10277msec); 0 zone resets 00:33:38.301 slat (usec): min=20, max=192, avg=60.76, stdev=15.07 00:33:38.301 clat (msec): min=8, max=536, avg=296.24, stdev=39.79 00:33:38.301 lat (msec): min=8, max=536, avg=296.30, stdev=39.80 00:33:38.301 clat percentiles (msec): 00:33:38.301 | 1.00th=[ 74], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.301 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.301 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.301 | 99.00th=[ 443], 99.50th=[ 502], 99.90th=[ 535], 99.95th=[ 535], 00:33:38.301 | 99.99th=[ 535] 00:33:38.301 bw ( KiB/s): min=13285, max=14336, per=3.34%, avg=13767.25, stdev=284.62, samples=20 00:33:38.301 iops : min= 51, max= 56, avg=53.60, stdev= 1.23, samples=20 00:33:38.301 lat (msec) : 10=0.18%, 20=0.18%, 50=0.36%, 100=0.54%, 250=1.44% 00:33:38.301 lat (msec) : 500=96.93%, 750=0.36% 00:33:38.301 cpu : usr=0.18%, sys=0.20%, ctx=555, majf=0, minf=1 00:33:38.301 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.301 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.301 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.301 issued rwts: total=0,554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.301 job21: (groupid=0, jobs=1): err= 0: pid=82557: Mon Jul 22 17:11:38 2024 00:33:38.301 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10264msec); 0 zone resets 00:33:38.301 slat (usec): min=18, max=624, avg=50.68, stdev=30.06 00:33:38.301 clat (msec): min=12, max=539, avg=296.95, stdev=38.01 00:33:38.301 lat (msec): min=13, max=539, avg=297.00, stdev=38.01 00:33:38.301 clat percentiles (msec): 00:33:38.301 | 1.00th=[ 96], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.301 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.301 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.301 | 99.00th=[ 447], 99.50th=[ 502], 99.90th=[ 542], 99.95th=[ 542], 00:33:38.301 | 99.99th=[ 542] 00:33:38.301 bw ( KiB/s): min=13285, max=14336, per=3.33%, avg=13745.85, stdev=253.08, samples=20 00:33:38.301 iops : min= 51, max= 56, avg=53.65, stdev= 1.09, samples=20 00:33:38.301 lat (msec) : 20=0.18%, 50=0.36%, 100=0.54%, 250=1.45%, 500=96.92% 00:33:38.301 lat (msec) : 750=0.54% 00:33:38.301 cpu : usr=0.16%, sys=0.19%, ctx=570, majf=0, minf=1 00:33:38.301 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.301 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.301 issued rwts: total=0,552,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.301 job22: (groupid=0, jobs=1): err= 0: pid=82558: Mon Jul 22 17:11:38 2024 00:33:38.301 write: IOPS=53, BW=13.5MiB/s (14.1MB/s)(139MiB/10266msec); 0 zone resets 00:33:38.301 slat (usec): min=26, 
max=183, avg=48.94, stdev=14.14 00:33:38.301 clat (msec): min=6, max=541, avg=295.94, stdev=41.79 00:33:38.301 lat (msec): min=6, max=541, avg=295.99, stdev=41.80 00:33:38.301 clat percentiles (msec): 00:33:38.301 | 1.00th=[ 58], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.301 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.301 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.301 | 99.00th=[ 451], 99.50th=[ 506], 99.90th=[ 542], 99.95th=[ 542], 00:33:38.301 | 99.99th=[ 542] 00:33:38.301 bw ( KiB/s): min=13285, max=14848, per=3.35%, avg=13794.25, stdev=353.30, samples=20 00:33:38.301 iops : min= 51, max= 58, avg=53.75, stdev= 1.48, samples=20 00:33:38.301 lat (msec) : 10=0.18%, 20=0.36%, 50=0.36%, 100=0.54%, 250=1.44% 00:33:38.301 lat (msec) : 500=96.57%, 750=0.54% 00:33:38.301 cpu : usr=0.19%, sys=0.18%, ctx=559, majf=0, minf=1 00:33:38.301 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.301 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.301 issued rwts: total=0,554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.301 job23: (groupid=0, jobs=1): err= 0: pid=82559: Mon Jul 22 17:11:38 2024 00:33:38.301 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10258msec); 0 zone resets 00:33:38.301 slat (usec): min=24, max=633, avg=64.11, stdev=34.54 00:33:38.301 clat (msec): min=33, max=520, avg=297.30, stdev=33.23 00:33:38.301 lat (msec): min=33, max=520, avg=297.37, stdev=33.22 00:33:38.301 clat percentiles (msec): 00:33:38.301 | 1.00th=[ 127], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.301 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.301 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.301 | 99.00th=[ 430], 99.50th=[ 485], 
99.90th=[ 523], 99.95th=[ 523], 00:33:38.301 | 99.99th=[ 523] 00:33:38.301 bw ( KiB/s): min=12825, max=14307, per=3.32%, avg=13693.00, stdev=319.39, samples=20 00:33:38.301 iops : min= 50, max= 55, avg=53.35, stdev= 1.18, samples=20 00:33:38.301 lat (msec) : 50=0.18%, 100=0.54%, 250=1.45%, 500=97.46%, 750=0.36% 00:33:38.301 cpu : usr=0.23%, sys=0.24%, ctx=556, majf=0, minf=1 00:33:38.301 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.301 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.301 issued rwts: total=0,551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.301 job24: (groupid=0, jobs=1): err= 0: pid=82560: Mon Jul 22 17:11:38 2024 00:33:38.301 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10258msec); 0 zone resets 00:33:38.301 slat (usec): min=24, max=159, avg=52.10, stdev=17.21 00:33:38.301 clat (msec): min=32, max=521, avg=297.35, stdev=33.28 00:33:38.301 lat (msec): min=32, max=521, avg=297.40, stdev=33.28 00:33:38.301 clat percentiles (msec): 00:33:38.301 | 1.00th=[ 127], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.301 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.301 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.301 | 99.00th=[ 430], 99.50th=[ 485], 99.90th=[ 523], 99.95th=[ 523], 00:33:38.301 | 99.99th=[ 523] 00:33:38.301 bw ( KiB/s): min=12800, max=14336, per=3.32%, avg=13693.20, stdev=403.69, samples=20 00:33:38.301 iops : min= 50, max= 56, avg=53.35, stdev= 1.63, samples=20 00:33:38.301 lat (msec) : 50=0.18%, 100=0.54%, 250=1.45%, 500=97.46%, 750=0.36% 00:33:38.302 cpu : usr=0.11%, sys=0.25%, ctx=563, majf=0, minf=1 00:33:38.302 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:33:38.302 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.302 issued rwts: total=0,551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.302 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.302 job25: (groupid=0, jobs=1): err= 0: pid=82561: Mon Jul 22 17:11:38 2024 00:33:38.302 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10256msec); 0 zone resets 00:33:38.302 slat (usec): min=27, max=183, avg=53.95, stdev=20.01 00:33:38.302 clat (msec): min=33, max=518, avg=297.30, stdev=32.92 00:33:38.302 lat (msec): min=33, max=518, avg=297.35, stdev=32.92 00:33:38.302 clat percentiles (msec): 00:33:38.302 | 1.00th=[ 128], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.302 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.302 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.302 | 99.00th=[ 426], 99.50th=[ 481], 99.90th=[ 518], 99.95th=[ 518], 00:33:38.302 | 99.99th=[ 518] 00:33:38.302 bw ( KiB/s): min=12825, max=14364, per=3.32%, avg=13694.45, stdev=324.84, samples=20 00:33:38.302 iops : min= 50, max= 56, avg=53.35, stdev= 1.27, samples=20 00:33:38.302 lat (msec) : 50=0.18%, 100=0.54%, 250=1.45%, 500=97.46%, 750=0.36% 00:33:38.302 cpu : usr=0.18%, sys=0.18%, ctx=582, majf=0, minf=1 00:33:38.302 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.302 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.302 issued rwts: total=0,551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.302 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.302 job26: (groupid=0, jobs=1): err= 0: pid=82562: Mon Jul 22 17:11:38 2024 00:33:38.302 write: IOPS=53, BW=13.5MiB/s (14.1MB/s)(139MiB/10266msec); 0 zone resets 00:33:38.302 slat (usec): min=27, max=152, avg=62.20, stdev=15.09 00:33:38.302 clat (msec): min=4, max=541, 
avg=295.93, stdev=41.88 00:33:38.302 lat (msec): min=4, max=541, avg=295.99, stdev=41.88 00:33:38.302 clat percentiles (msec): 00:33:38.302 | 1.00th=[ 59], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.302 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.302 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.302 | 99.00th=[ 451], 99.50th=[ 506], 99.90th=[ 542], 99.95th=[ 542], 00:33:38.302 | 99.99th=[ 542] 00:33:38.302 bw ( KiB/s): min=13285, max=14848, per=3.35%, avg=13794.25, stdev=353.30, samples=20 00:33:38.302 iops : min= 51, max= 58, avg=53.75, stdev= 1.48, samples=20 00:33:38.302 lat (msec) : 10=0.36%, 20=0.18%, 50=0.36%, 100=0.54%, 250=1.44% 00:33:38.302 lat (msec) : 500=96.57%, 750=0.54% 00:33:38.302 cpu : usr=0.24%, sys=0.24%, ctx=562, majf=0, minf=1 00:33:38.302 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.302 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.302 issued rwts: total=0,554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.302 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.302 job27: (groupid=0, jobs=1): err= 0: pid=82563: Mon Jul 22 17:11:38 2024 00:33:38.302 write: IOPS=53, BW=13.5MiB/s (14.1MB/s)(139MiB/10275msec); 0 zone resets 00:33:38.302 slat (usec): min=27, max=3319, avg=57.65, stdev=140.52 00:33:38.302 clat (msec): min=4, max=538, avg=296.11, stdev=40.70 00:33:38.302 lat (msec): min=7, max=538, avg=296.17, stdev=40.66 00:33:38.302 clat percentiles (msec): 00:33:38.302 | 1.00th=[ 68], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.302 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.302 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.302 | 99.00th=[ 447], 99.50th=[ 502], 99.90th=[ 542], 99.95th=[ 542], 00:33:38.302 | 99.99th=[ 542] 00:33:38.302 bw 
( KiB/s): min=13285, max=14364, per=3.34%, avg=13770.05, stdev=287.83, samples=20 00:33:38.302 iops : min= 51, max= 56, avg=53.65, stdev= 1.23, samples=20 00:33:38.302 lat (msec) : 10=0.18%, 20=0.36%, 50=0.36%, 100=0.36%, 250=1.44% 00:33:38.302 lat (msec) : 500=96.75%, 750=0.54% 00:33:38.302 cpu : usr=0.16%, sys=0.20%, ctx=567, majf=0, minf=1 00:33:38.302 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.302 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.302 issued rwts: total=0,554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.302 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.302 job28: (groupid=0, jobs=1): err= 0: pid=82564: Mon Jul 22 17:11:38 2024 00:33:38.302 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10270msec); 0 zone resets 00:33:38.302 slat (usec): min=18, max=174, avg=61.84, stdev=17.34 00:33:38.302 clat (msec): min=16, max=530, avg=297.13, stdev=35.63 00:33:38.302 lat (msec): min=16, max=530, avg=297.19, stdev=35.63 00:33:38.302 clat percentiles (msec): 00:33:38.302 | 1.00th=[ 112], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.302 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.302 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.302 | 99.00th=[ 439], 99.50th=[ 493], 99.90th=[ 531], 99.95th=[ 531], 00:33:38.302 | 99.99th=[ 531] 00:33:38.302 bw ( KiB/s): min=13285, max=14336, per=3.33%, avg=13720.25, stdev=270.08, samples=20 00:33:38.302 iops : min= 51, max= 56, avg=53.55, stdev= 1.15, samples=20 00:33:38.302 lat (msec) : 20=0.18%, 50=0.18%, 100=0.54%, 250=1.45%, 500=97.28% 00:33:38.302 lat (msec) : 750=0.36% 00:33:38.302 cpu : usr=0.19%, sys=0.22%, ctx=557, majf=0, minf=1 00:33:38.302 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.302 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.302 issued rwts: total=0,552,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.302 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.302 job29: (groupid=0, jobs=1): err= 0: pid=82565: Mon Jul 22 17:11:38 2024 00:33:38.302 write: IOPS=53, BW=13.4MiB/s (14.1MB/s)(138MiB/10258msec); 0 zone resets 00:33:38.302 slat (usec): min=21, max=288, avg=63.01, stdev=17.67 00:33:38.302 clat (msec): min=33, max=520, avg=297.33, stdev=33.14 00:33:38.302 lat (msec): min=33, max=520, avg=297.39, stdev=33.15 00:33:38.302 clat percentiles (msec): 00:33:38.302 | 1.00th=[ 128], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:33:38.302 | 30.00th=[ 296], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:33:38.302 | 70.00th=[ 300], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 313], 00:33:38.302 | 99.00th=[ 430], 99.50th=[ 485], 99.90th=[ 523], 99.95th=[ 523], 00:33:38.302 | 99.99th=[ 523] 00:33:38.302 bw ( KiB/s): min=12825, max=14307, per=3.32%, avg=13693.00, stdev=319.39, samples=20 00:33:38.302 iops : min= 50, max= 55, avg=53.35, stdev= 1.18, samples=20 00:33:38.302 lat (msec) : 50=0.18%, 100=0.54%, 250=1.45%, 500=97.46%, 750=0.36% 00:33:38.302 cpu : usr=0.23%, sys=0.24%, ctx=560, majf=0, minf=1 00:33:38.302 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=97.3%, 32=0.0%, >=64=0.0% 00:33:38.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.302 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.302 issued rwts: total=0,551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.302 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:38.302 00:33:38.302 Run status group 0 (all jobs): 00:33:38.302 WRITE: bw=403MiB/s (422MB/s), 13.4MiB/s-13.5MiB/s (14.1MB/s-14.2MB/s), io=4138MiB (4339MB), run=10255-10277msec 00:33:38.302 00:33:38.302 Disk stats (read/write): 00:33:38.302 sda: ios=48/544, 
merge=0/0, ticks=126/160192, in_queue=160318, util=95.56% 00:33:38.302 sdb: ios=48/546, merge=0/0, ticks=184/160451, in_queue=160634, util=95.96% 00:33:38.302 sdc: ios=48/544, merge=0/0, ticks=170/160211, in_queue=160381, util=96.04% 00:33:38.302 sdd: ios=48/543, merge=0/0, ticks=180/159979, in_queue=160158, util=96.09% 00:33:38.302 sde: ios=48/543, merge=0/0, ticks=178/159985, in_queue=160163, util=96.36% 00:33:38.302 sdf: ios=48/549, merge=0/0, ticks=214/160521, in_queue=160734, util=96.63% 00:33:38.302 sdg: ios=48/544, merge=0/0, ticks=172/160129, in_queue=160300, util=96.70% 00:33:38.302 sdh: ios=34/543, merge=0/0, ticks=169/159964, in_queue=160133, util=96.55% 00:33:38.302 sdi: ios=31/543, merge=0/0, ticks=167/159966, in_queue=160134, util=96.56% 00:33:38.302 sdj: ios=22/543, merge=0/0, ticks=131/160001, in_queue=160132, util=96.61% 00:33:38.302 sdk: ios=0/547, merge=0/0, ticks=0/160318, in_queue=160319, util=96.57% 00:33:38.302 sdl: ios=5/543, merge=0/0, ticks=18/159969, in_queue=159987, util=96.72% 00:33:38.302 sdm: ios=0/545, merge=0/0, ticks=0/160293, in_queue=160293, util=96.93% 00:33:38.302 sdn: ios=0/543, merge=0/0, ticks=0/160004, in_queue=160004, util=96.92% 00:33:38.302 sdo: ios=0/544, merge=0/0, ticks=0/160192, in_queue=160192, util=97.03% 00:33:38.302 sdp: ios=0/543, merge=0/0, ticks=0/159990, in_queue=159990, util=97.29% 00:33:38.302 sdq: ios=0/543, merge=0/0, ticks=0/159962, in_queue=159963, util=97.48% 00:33:38.302 sdr: ios=0/546, merge=0/0, ticks=0/160423, in_queue=160423, util=97.83% 00:33:38.302 sds: ios=0/543, merge=0/0, ticks=0/160025, in_queue=160025, util=97.69% 00:33:38.302 sdt: ios=0/543, merge=0/0, ticks=0/159944, in_queue=159944, util=97.91% 00:33:38.302 sdu: ios=0/547, merge=0/0, ticks=0/160337, in_queue=160338, util=98.23% 00:33:38.302 sdv: ios=0/546, merge=0/0, ticks=0/160346, in_queue=160345, util=98.32% 00:33:38.302 sdw: ios=0/548, merge=0/0, ticks=0/160353, in_queue=160354, util=98.41% 00:33:38.302 sdx: ios=0/543, merge=0/0, 
ticks=0/159978, in_queue=159978, util=98.28%
00:33:38.302 sdy: ios=0/543, merge=0/0, ticks=0/159946, in_queue=159946, util=98.25%
00:33:38.302 sdz: ios=0/543, merge=0/0, ticks=0/159980, in_queue=159980, util=98.34%
00:33:38.302 sdaa: ios=0/548, merge=0/0, ticks=0/160388, in_queue=160387, util=98.78%
00:33:38.302 sdab: ios=0/548, merge=0/0, ticks=0/160495, in_queue=160495, util=98.76%
00:33:38.302 sdac: ios=0/545, merge=0/0, ticks=0/160337, in_queue=160337, util=98.79%
00:33:38.302 sdad: ios=0/543, merge=0/0, ticks=0/159984, in_queue=159984, util=98.90%
00:33:38.302 [2024-07-22 17:11:38.427924] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:38.302 [2024-07-22 17:11:38.430790] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:38.302 [2024-07-22 17:11:38.433517] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:38.302 [2024-07-22 17:11:38.441699] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:38.303 [2024-07-22 17:11:38.445073] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:38.303 [2024-07-22 17:11:38.448074] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:38.303 [2024-07-22 17:11:38.450889] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:38.303 17:11:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@79 -- # sync
00:33:38.303 [2024-07-22 17:11:38.454251] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:38.303 [2024-07-22 17:11:38.457543] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:38.303 [2024-07-22 17:11:38.460640] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:38.303 [2024-07-22 17:11:38.464320] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:38.303 [2024-07-22 17:11:38.467420] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:38.303 17:11:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@81 -- # trap - SIGINT SIGTERM EXIT
00:33:38.303 [2024-07-22 17:11:38.470220] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:38.303 17:11:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@83 -- # rm -f
00:33:38.303 [2024-07-22 17:11:38.473164] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:33:38.303 17:11:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@84 -- # iscsicleanup
00:33:38.303 17:11:38 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection'
00:33:38.303 Cleaning up iSCSI connection
00:33:38.303 17:11:38 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout
00:33:38.303 Logging out of session [sid: 41, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 42, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 43, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 44, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 45, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 46, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 47, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 48, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 49, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 50, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 51, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 52, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 53, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 54, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 55, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 56, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 57, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 58, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 59, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 60, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 61, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 62, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 63, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 64, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 65, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 66, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 67, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 68, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 69, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260]
00:33:38.303 Logging out of session [sid: 70, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260]
00:33:38.303 Logout of [sid: 41, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 42, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 43, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 44, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 45, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 46, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 47, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 48, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 49, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 50, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 51, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 52, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 53, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 54, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 55, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 56, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 57, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 58, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 59, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 60, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 61, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 62, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 63, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 64, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 65, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 66, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 67, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 68, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 69, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] successful.
00:33:38.303 Logout of [sid: 70, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] successful.
00:33:38.303 17:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete
00:33:38.303 17:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@983 -- # rm -rf
00:33:38.303 INFO: Removing lvol bdevs
00:33:38.303 17:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@85 -- # remove_backends
00:33:38.303 17:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@22 -- # echo 'INFO: Removing lvol bdevs'
00:33:38.303 17:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # seq 1 30
00:33:38.303 17:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:38.304 17:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_1
00:33:38.304 17:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_1
00:33:38.304 [2024-07-22 17:11:39.515268] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (30419539-1223-4074-8543-cff1e2668e5c) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:38.304 INFO: lvol bdev lvs0/lbd_1 removed
00:33:38.304 17:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_1 removed'
00:33:38.304 17:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:38.304 17:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_2
00:33:38.304 17:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_2
00:33:38.304 [2024-07-22 17:11:39.807419] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (0009660e-b0b6-473c-91ba-7f577060d198) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:38.304 INFO: lvol bdev lvs0/lbd_2 removed
00:33:38.304 17:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_2 removed'
00:33:38.304 17:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:38.304 17:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_3
00:33:38.304 17:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_3
00:33:38.561 [2024-07-22 17:11:40.099620] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (4ee9a997-dfaa-4de5-884d-525354b9295a) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:38.561 INFO: lvol bdev lvs0/lbd_3 removed
00:33:38.561 17:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_3 removed'
00:33:38.561 17:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:38.561 17:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_4
00:33:38.561 17:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_4
00:33:38.818 [2024-07-22 17:11:40.347810] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (cf0e9369-094b-487e-9754-f9935a13c191) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:38.818 INFO: lvol bdev lvs0/lbd_4 removed
00:33:38.818 17:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_4 removed'
00:33:38.818 17:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:38.818 17:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_5
00:33:38.818 17:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_5
00:33:39.076 [2024-07-22 17:11:40.587936] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (0aae2362-d634-41c8-8e0a-d8774cedc53b) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:39.076 INFO: lvol bdev lvs0/lbd_5 removed
00:33:39.076 17:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_5 removed'
00:33:39.076 17:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:39.076 17:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_6
00:33:39.076 17:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_6
00:33:39.333 [2024-07-22 17:11:40.820101] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (07bfa119-590f-4a3f-8140-f112c29f3e9b) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:39.333 INFO: lvol bdev lvs0/lbd_6 removed
00:33:39.333 17:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_6 removed'
00:33:39.333 17:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:39.333 17:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_7
00:33:39.333 17:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_7
00:33:39.612 [2024-07-22 17:11:41.060185] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (3cf7bac2-6844-4787-b399-0e88841d1171) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:39.612 INFO: lvol bdev lvs0/lbd_7 removed
00:33:39.612 17:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_7 removed'
00:33:39.612 17:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:39.612 17:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_8
00:33:39.612 17:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_8
00:33:39.869 [2024-07-22 17:11:41.292556] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (7b19d1c9-ea89-452a-b98d-420e482a979f) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:39.869 INFO: lvol bdev lvs0/lbd_8 removed
00:33:39.869 17:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_8 removed'
00:33:39.869 17:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:39.869 17:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_9
00:33:39.869 17:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_9
00:33:40.127 [2024-07-22 17:11:41.536640] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (f7465834-ec12-455b-b0a7-3d31ccfd4724) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:40.127 INFO: lvol bdev lvs0/lbd_9 removed
00:33:40.127 17:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_9 removed'
00:33:40.127 17:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:40.127 17:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_10
00:33:40.127 17:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_10
00:33:40.384 [2024-07-22 17:11:41.772873] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (b65b887f-acf8-485b-a7fa-b114ed894919) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:40.384 INFO: lvol bdev lvs0/lbd_10 removed
00:33:40.384 17:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_10 removed'
00:33:40.384 17:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:40.384 17:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_11
00:33:40.384 17:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_11
00:33:40.642 [2024-07-22 17:11:42.045058] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (6b7a10f8-161f-4391-a016-593ae2e7440a) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:40.642 INFO: lvol bdev lvs0/lbd_11 removed
00:33:40.642 17:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_11 removed'
00:33:40.642 17:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:40.642 17:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_12
00:33:40.642 17:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_12
00:33:40.900 [2024-07-22 17:11:42.277197] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (81b0a6f9-1f45-44f9-b875-50354d770354) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:40.900 INFO: lvol bdev lvs0/lbd_12 removed
00:33:40.900 17:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_12 removed'
00:33:40.900 17:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:40.900 17:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_13
00:33:40.900 17:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_13
00:33:40.900 [2024-07-22 17:11:42.509401] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (6d6a605b-8731-4364-a600-760f6465bb68) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:41.158 INFO: lvol bdev lvs0/lbd_13 removed
00:33:41.158 17:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_13 removed'
00:33:41.158 17:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:41.158 17:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_14
00:33:41.158 17:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_14
00:33:41.158 [2024-07-22 17:11:42.749546] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (deacbcd7-52a8-46a6-a15f-5f36b3f10acd) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:41.158 INFO: lvol bdev lvs0/lbd_14 removed
00:33:41.158 17:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_14 removed'
00:33:41.158 17:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:41.158 17:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_15
00:33:41.158 17:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_15
00:33:41.416 [2024-07-22 17:11:42.973643] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (84ffd593-e214-45c0-9608-c0ea07e5b995) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:41.416 INFO: lvol bdev lvs0/lbd_15 removed
00:33:41.416 17:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_15 removed'
00:33:41.416 17:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:41.416 17:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_16
00:33:41.416 17:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_16
00:33:41.675 [2024-07-22 17:11:43.214721] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (7c7bda96-111c-4e88-bdd0-06c9a3f4e489) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:41.675 INFO: lvol bdev lvs0/lbd_16 removed
00:33:41.675 17:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_16 removed'
00:33:41.675 17:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:41.675 17:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_17
00:33:41.675 17:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_17
00:33:41.934 [2024-07-22 17:11:43.458866] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (715c51ba-c719-4dea-854b-80a123ce44b4) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:41.934 INFO: lvol bdev lvs0/lbd_17 removed
00:33:41.934 17:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_17 removed'
00:33:41.934 17:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:41.934 17:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_18
00:33:41.934 17:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_18
00:33:42.192 [2024-07-22 17:11:43.686944] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (f6924071-26a5-4a76-9b94-c39582462c27) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:42.192 INFO: lvol bdev lvs0/lbd_18 removed
00:33:42.192 17:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_18 removed'
00:33:42.192 17:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:42.192 17:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_19
00:33:42.192 17:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_19
00:33:42.474 [2024-07-22 17:11:43.939199] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (ae9fac51-3c62-4805-8723-efb0eda2bd4e) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:42.474 INFO: lvol bdev lvs0/lbd_19 removed
00:33:42.474 17:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_19 removed'
00:33:42.474 17:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:42.474 17:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_20
00:33:42.474 17:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_20
00:33:42.733 [2024-07-22 17:11:44.167357] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (c5d0d8c5-cc9d-402f-95be-0d60bc59e5be) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:42.733 INFO: lvol bdev lvs0/lbd_20 removed
00:33:42.733 17:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_20 removed'
00:33:42.733 17:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:42.733 17:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_21
00:33:42.733 17:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_21
00:33:42.991 [2024-07-22 17:11:44.415510] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (0bd7abc7-4d7d-4708-b87a-1328a4418d50) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:42.991 INFO: lvol bdev lvs0/lbd_21 removed
00:33:42.991 17:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_21 removed'
00:33:42.991 17:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:42.991 17:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_22
00:33:42.991 17:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_22
00:33:43.249 [2024-07-22 17:11:44.655712] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (47a92d6a-da30-4f85-9a90-2e7e27f35788) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:43.249 INFO: lvol bdev lvs0/lbd_22 removed
00:33:43.249 17:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_22 removed'
00:33:43.249 17:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:43.249 17:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_23
00:33:43.249 17:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_23
00:33:43.508 [2024-07-22 17:11:44.951858] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (06f463d4-c2ba-4c96-bd42-5ecf70a6e82d) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:43.508 INFO: lvol bdev lvs0/lbd_23 removed
00:33:43.508 17:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_23 removed'
00:33:43.508 17:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:43.508 17:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_24
00:33:43.508 17:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_24
00:33:43.767 [2024-07-22 17:11:45.200065] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (a250665d-44b3-4405-95ab-01aba0e470b4) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:43.767 INFO: lvol bdev lvs0/lbd_24 removed
00:33:43.767 17:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_24 removed'
00:33:43.767 17:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:43.767 17:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_25
00:33:43.767 17:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_25
00:33:44.026 [2024-07-22 17:11:45.432183] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (a6d6ee40-0e05-4d83-872f-c9552034dedf) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:44.026 INFO: lvol bdev lvs0/lbd_25 removed
00:33:44.026 17:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_25 removed'
00:33:44.026 17:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:44.026 17:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_26
00:33:44.026 17:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_26
00:33:44.285 [2024-07-22 17:11:45.664266] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (064f7869-2a34-42de-a697-10820b2621ce) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:44.285 INFO: lvol bdev lvs0/lbd_26 removed
00:33:44.285 17:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_26 removed'
00:33:44.285 17:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:44.285 17:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_27
00:33:44.285 17:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_27
00:33:44.543 [2024-07-22 17:11:45.952572] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (714ce6e8-aabb-4cd4-8965-4621ff8ab67b) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:44.543 INFO: lvol bdev lvs0/lbd_27 removed
00:33:44.543 17:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_27 removed'
00:33:44.543 17:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:44.543 17:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_28
00:33:44.543 17:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_28
00:33:44.802 [2024-07-22 17:11:46.192654] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (d6ec458e-0a07-45ba-ae26-543872ba18d7) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:44.802 INFO: lvol bdev lvs0/lbd_28 removed
00:33:44.802 17:11:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_28 removed'
00:33:44.802 17:11:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:44.802 17:11:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_29
00:33:44.802 17:11:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_29
00:33:45.060 [2024-07-22 17:11:46.533050] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (8245dccf-2e25-4c0c-a3ff-0bb91f098876) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:45.060 INFO: lvol bdev lvs0/lbd_29 removed
00:33:45.060 17:11:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_29 removed'
00:33:45.060 17:11:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER)
00:33:45.060 17:11:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_30
00:33:45.060 17:11:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_30
00:33:45.339 [2024-07-22 17:11:46.781172] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (4e0d43b2-39ac-41a3-8f81-4442376c1907) received event(SPDK_BDEV_EVENT_REMOVE)
00:33:45.339 INFO: lvol bdev lvs0/lbd_30 removed
00:33:45.339 17:11:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_30 removed'
00:33:45.339 17:11:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@28 -- # sleep 1
00:33:46.274 INFO: Removing lvol stores
00:33:46.274 17:11:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@30 -- # echo 'INFO: Removing lvol stores'
00:33:46.274 17:11:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:33:46.533 INFO: lvol store lvs0 removed
00:33:46.533 INFO: Removing NVMe
00:33:46.533 17:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@32 -- # echo 'INFO: lvol store lvs0 removed'
00:33:46.533 17:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@34 -- # echo 'INFO: Removing NVMe'
00:33:46.533 17:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:33:47.468 17:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@37 -- # return 0
00:33:47.468 17:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@86 -- # killprocess 80674
00:33:47.468 17:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 80674 ']'
00:33:47.468 17:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@952 -- # kill -0 80674
00:33:47.468 17:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@953 -- # uname
00:33:47.468 17:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:47.468 17:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80674
00:33:47.468 killing process with pid 80674
00:33:47.468 17:11:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:33:47.468 17:11:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:33:47.468 17:11:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80674'
00:33:47.468 17:11:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@967 -- # kill 80674
00:33:47.468 17:11:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@972 -- # wait 80674
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@87 -- # iscsitestfini
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']'
00:33:50.006
00:33:50.006 real	0m54.028s
00:33:50.006 user	1m7.851s
00:33:50.006 sys	0m13.019s
00:33:50.006 ************************************
00:33:50.006 END TEST iscsi_tgt_multiconnection
00:33:50.006 ************************************
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:33:50.006 17:11:51 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0
00:33:50.006 17:11:51 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@46 -- # '[' 1 -eq 1 ']'
00:33:50.006 17:11:51 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@47 -- # run_test iscsi_tgt_ext4test /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ext4test/ext4test.sh
00:33:50.006 17:11:51 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:33:50.006 17:11:51 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable
00:33:50.006 17:11:51 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x
00:33:50.006 ************************************
00:33:50.006 START TEST iscsi_tgt_ext4test
00:33:50.006 ************************************
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ext4test/ext4test.sh
00:33:50.006 * Looking for test storage...
00:33:50.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ext4test
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE")
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}")
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@24 -- # iscsitestinit
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']'
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@28 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@29 -- # node_base=iqn.2013-06.com.intel.ch.spdk
00:33:50.006 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@31 -- # timing_enter start_iscsi_tgt
00:33:50.007 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@722 -- # xtrace_disable
00:33:50.007 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x
00:33:50.007 17:11:51
iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@34 -- # pid=83128 00:33:50.007 Process pid: 83128 00:33:50.007 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@35 -- # echo 'Process pid: 83128' 00:33:50.007 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@37 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:50.007 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@33 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:33:50.007 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@39 -- # waitforlisten 83128 00:33:50.007 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@829 -- # '[' -z 83128 ']' 00:33:50.007 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:50.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:50.007 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:50.007 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:50.007 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:50.007 17:11:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:33:50.266 [2024-07-22 17:11:51.754963] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:33:50.266 [2024-07-22 17:11:51.755163] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83128 ] 00:33:50.525 [2024-07-22 17:11:51.928547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.783 [2024-07-22 17:11:52.271479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:51.350 17:11:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:51.350 17:11:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@862 -- # return 0 00:33:51.350 17:11:52 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 4 -b iqn.2013-06.com.intel.ch.spdk 00:33:51.350 17:11:52 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:33:52.726 17:11:54 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:33:52.726 17:11:54 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:33:52.985 17:11:54 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 512 4096 --name Malloc0 00:33:53.927 Malloc0 00:33:53.927 iscsi_tgt is listening. Running tests... 00:33:53.927 17:11:55 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@44 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:33:53.927 17:11:55 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@46 -- # timing_exit start_iscsi_tgt 00:33:53.927 17:11:55 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:53.927 17:11:55 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:33:53.927 17:11:55 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:33:54.185 17:11:55 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:33:54.443 17:11:55 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_create Malloc0 00:33:54.702 true 00:33:54.702 17:11:56 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target0 Target0_alias EE_Malloc0:0 1:2 64 -d 00:33:54.960 17:11:56 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@55 -- # sleep 1 00:33:55.895 17:11:57 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@57 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:33:55.895 10.0.0.1:3260,1 iqn.2013-06.com.intel.ch.spdk:Target0 00:33:55.895 17:11:57 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@58 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:33:56.154 Logging in to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] 00:33:56.154 Login to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] successful. 
00:33:56.154 17:11:57 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@59 -- # waitforiscsidevices 1 00:33:56.154 17:11:57 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@116 -- # local num=1 00:33:56.154 17:11:57 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:33:56.154 17:11:57 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:33:56.154 17:11:57 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:33:56.154 17:11:57 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:33:56.154 [2024-07-22 17:11:57.540088] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:56.154 17:11:57 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # true 00:33:56.154 17:11:57 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # n=0 00:33:56.154 17:11:57 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 1 ']' 00:33:56.154 17:11:57 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@121 -- # sleep 0.1 00:33:56.154 17:11:57 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i++ )) 00:33:56.154 17:11:57 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:33:56.154 17:11:57 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:33:56.154 17:11:57 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:33:56.154 17:11:57 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # n=1 00:33:56.154 17:11:57 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:33:56.154 17:11:57 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@123 -- # return 0 00:33:56.154 Test error injection 00:33:56.154 17:11:57 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@61 -- # echo 'Test error injection' 00:33:56.154 17:11:57 
iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_inject_error EE_Malloc0 all failure -n 1000 00:33:56.413 17:11:58 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # iscsiadm -m session -P 3 00:33:56.413 17:11:58 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # grep 'Attached scsi disk' 00:33:56.413 17:11:58 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # awk '{print $4}' 00:33:56.413 17:11:58 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # head -n1 00:33:56.413 17:11:58 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # dev=sda 00:33:56.413 17:11:58 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@65 -- # waitforfile /dev/sda 00:33:56.413 17:11:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1265 -- # local i=0 00:33:56.671 17:11:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda ']' 00:33:56.671 17:11:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /dev/sda ']' 00:33:56.671 17:11:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1276 -- # return 0 00:33:56.671 17:11:58 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@66 -- # make_filesystem ext4 /dev/sda 00:33:56.671 17:11:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@924 -- # local fstype=ext4 00:33:56.671 17:11:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda 00:33:56.671 17:11:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@926 -- # local i=0 00:33:56.671 17:11:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@927 -- # local force 00:33:56.671 17:11:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:33:56.671 17:11:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@930 -- # force=-F 00:33:56.671 17:11:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:33:56.671 mke2fs 1.46.5 (30-Dec-2021) 00:33:56.929 Discarding device blocks: 0/131072 done 00:33:57.188 Warning: could not erase sector 2: Input/output error 00:33:57.188 Creating filesystem with 131072 4k blocks and 32768 inodes 00:33:57.188 Filesystem UUID: 0d03f6fd-2d37-4d1d-9554-351c2e07631d 00:33:57.188 Superblock backups stored on blocks: 00:33:57.188 32768, 98304 00:33:57.188 00:33:57.188 Allocating group tables: 0/4 done 00:33:57.188 Warning: could not read block 0: Input/output error 00:33:57.447 Warning: could not erase sector 0: Input/output error 00:33:57.447 Writing inode tables: 0/4 done 00:33:57.447 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:33:57.447 17:11:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 0 -ge 15 ']' 00:33:57.447 17:11:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=1 00:33:57.447 17:11:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:33:57.447 [2024-07-22 
17:11:58.921492] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:58.464 17:11:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:33:58.464 mke2fs 1.46.5 (30-Dec-2021) 00:33:58.724 Discarding device blocks: 0/131072 done 00:33:58.724 Creating filesystem with 131072 4k blocks and 32768 inodes 00:33:58.724 Filesystem UUID: 9698224a-ea36-40fc-ba56-f5cc5472fad1 00:33:58.724 Superblock backups stored on blocks: 00:33:58.724 32768, 98304 00:33:58.724 00:33:58.724 Allocating group tables: 0/4Warning: could not erase sector 2: Input/output error 00:33:58.724 done 00:33:58.983 Warning: could not read block 0: Input/output error 00:33:58.983 Warning: could not erase sector 0: Input/output error 00:33:58.983 Writing inode tables: 0/4 done 00:33:58.983 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:33:58.983 17:12:00 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 1 -ge 15 ']' 00:33:58.983 17:12:00 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=2 00:33:58.983 [2024-07-22 17:12:00.546510] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:33:58.983 17:12:00 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:34:00.360 17:12:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:34:00.360 mke2fs 1.46.5 (30-Dec-2021) 00:34:00.360 Discarding device blocks: 0/131072 done 00:34:00.360 Creating filesystem with 131072 4k blocks and 32768 inodes 00:34:00.360 Filesystem UUID: 4d871dcc-c0a3-4d39-9fb5-32fbd41f9fa5 00:34:00.360 Superblock backups stored on blocks: 00:34:00.360 32768, 98304 00:34:00.360 00:34:00.360 Allocating group tables: 0/4 done 00:34:00.360 Warning: could not erase sector 2: Input/output error 00:34:00.618 Warning: could not read block 0: Input/output error 00:34:00.618 Warning: could not erase sector 0: Input/output 
error 00:34:00.618 Writing inode tables: 0/4 done 00:34:00.618 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:34:00.618 17:12:02 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 2 -ge 15 ']' 00:34:00.618 17:12:02 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=3 00:34:00.618 17:12:02 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:34:00.618 [2024-07-22 17:12:02.174234] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:34:01.995 17:12:03 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:34:01.995 mke2fs 1.46.5 (30-Dec-2021) 00:34:01.995 Discarding device blocks: 0/131072 done 00:34:02.253 Creating filesystem with 131072 4k blocks and 32768 inodes 00:34:02.253 Filesystem UUID: 1b23d46d-8af2-41a5-8631-9005fa9947cc 00:34:02.253 Superblock backups stored on blocks: 00:34:02.253 32768, 98304 00:34:02.253 00:34:02.253 Allocating group tables: 0/4 done 00:34:02.253 Warning: could not erase sector 2: Input/output error 00:34:02.253 Warning: could not read block 0: Input/output error 00:34:02.253 Warning: could not erase sector 0: Input/output error 00:34:02.253 Writing inode tables: 0/4 done 00:34:02.512 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:34:02.512 17:12:03 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 3 -ge 15 ']' 00:34:02.512 17:12:03 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=4 00:34:02.512 17:12:03 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:34:03.447 17:12:04 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:34:03.447 mke2fs 1.46.5 (30-Dec-2021) 00:34:03.706 Discarding device blocks: 0/131072 done 00:34:03.706 Creating filesystem with 131072 4k blocks and 32768 inodes 00:34:03.706 Filesystem UUID: 
5f6ccec6-34ae-44eb-9bf0-3d793964915b 00:34:03.706 Superblock backups stored on blocks: 00:34:03.706 32768, 98304 00:34:03.706 00:34:03.706 Allocating group tables: 0/4 done 00:34:03.706 Warning: could not erase sector 2: Input/output error 00:34:03.965 Warning: could not read block 0: Input/output error 00:34:03.965 Warning: could not erase sector 0: Input/output error 00:34:03.965 Writing inode tables: 0/4 done 00:34:03.965 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:34:03.965 17:12:05 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 4 -ge 15 ']' 00:34:03.965 17:12:05 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=5 00:34:03.965 17:12:05 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:34:03.965 [2024-07-22 17:12:05.552314] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:34:05.342 17:12:06 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:34:05.342 mke2fs 1.46.5 (30-Dec-2021) 00:34:05.342 Discarding device blocks: 0/131072 done 00:34:05.342 Creating filesystem with 131072 4k blocks and 32768 inodes 00:34:05.342 Filesystem UUID: 90d0ed78-45b6-4aeb-8721-cb202eee67ae 00:34:05.342 Superblock backups stored on blocks: 00:34:05.342 32768, 98304 00:34:05.342 00:34:05.342 Allocating group tables: 0/4Warning: could not erase sector 2: Input/output error 00:34:05.342 done 00:34:05.601 Warning: could not read block 0: Input/output error 00:34:05.601 Warning: could not erase sector 0: Input/output error 00:34:05.601 Writing inode tables: 0/4 done 00:34:05.601 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:34:05.601 17:12:07 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 5 -ge 15 ']' 00:34:05.601 17:12:07 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=6 00:34:05.601 17:12:07 iscsi_tgt.iscsi_tgt_ext4test -- 
common/autotest_common.sh@940 -- # sleep 1 00:34:05.601 [2024-07-22 17:12:07.179969] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:34:06.976 17:12:08 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:34:06.976 mke2fs 1.46.5 (30-Dec-2021) 00:34:06.976 Discarding device blocks: 0/131072 done 00:34:07.234 Creating filesystem with 131072 4k blocks and 32768 inodes 00:34:07.234 Filesystem UUID: fc2121cd-3993-460a-9fa5-d7f03f265978 00:34:07.234 Superblock backups stored on blocks: 00:34:07.234 32768, 98304 00:34:07.234 00:34:07.234 Allocating group tables: Warning: could not erase sector 2: Input/output error 00:34:07.234 0/4 done 00:34:07.234 Warning: could not read block 0: Input/output error 00:34:07.234 Warning: could not erase sector 0: Input/output error 00:34:07.234 Writing inode tables: 0/4 done 00:34:07.493 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:34:07.493 17:12:08 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 6 -ge 15 ']' 00:34:07.493 17:12:08 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=7 00:34:07.493 17:12:08 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:34:07.493 [2024-07-22 17:12:08.905388] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:34:08.429 17:12:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:34:08.429 mke2fs 1.46.5 (30-Dec-2021) 00:34:08.687 Discarding device blocks: 0/131072 done 00:34:08.687 Creating filesystem with 131072 4k blocks and 32768 inodes 00:34:08.687 Filesystem UUID: a9a32d00-fdd7-46f1-9877-530cb36462ad 00:34:08.687 Superblock backups stored on blocks: 00:34:08.687 32768, 98304 00:34:08.687 00:34:08.687 Allocating group tables: 0/4Warning: could not erase sector 2: Input/output error 00:34:08.687 done 00:34:08.946 Warning: could not read block 0: Input/output 
error 00:34:08.946 Warning: could not erase sector 0: Input/output error 00:34:08.946 Writing inode tables: 0/4 done 00:34:08.946 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:34:08.946 17:12:10 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 7 -ge 15 ']' 00:34:08.946 17:12:10 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=8 00:34:08.946 17:12:10 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:34:08.946 [2024-07-22 17:12:10.530872] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:34:10.324 17:12:11 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:34:10.324 mke2fs 1.46.5 (30-Dec-2021) 00:34:10.324 Discarding device blocks: 0/131072 done 00:34:10.324 Warning: could not erase sector 2: Input/output error 00:34:10.324 Creating filesystem with 131072 4k blocks and 32768 inodes 00:34:10.324 Filesystem UUID: 927cbc2a-c762-4cd0-af88-c10218470e45 00:34:10.324 Superblock backups stored on blocks: 00:34:10.324 32768, 98304 00:34:10.324 00:34:10.324 Allocating group tables: 0/4 done 00:34:10.583 Warning: could not read block 0: Input/output error 00:34:10.583 Warning: could not erase sector 0: Input/output error 00:34:10.583 Writing inode tables: 0/4 done 00:34:10.583 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:34:10.583 17:12:12 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 8 -ge 15 ']' 00:34:10.583 17:12:12 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=9 00:34:10.583 17:12:12 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:34:10.583 [2024-07-22 17:12:12.157740] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:34:11.962 17:12:13 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:34:11.962 mke2fs 1.46.5 
(30-Dec-2021) 00:34:11.962 Discarding device blocks: 0/131072 done 00:34:12.220 Creating filesystem with 131072 4k blocks and 32768 inodes 00:34:12.220 Filesystem UUID: 88178299-8c33-4178-a134-4683630f1a00 00:34:12.220 Superblock backups stored on blocks: 00:34:12.220 32768, 98304 00:34:12.220 00:34:12.220 Allocating group tables: 0/4 done 00:34:12.220 Warning: could not erase sector 2: Input/output error 00:34:12.220 Warning: could not read block 0: Input/output error 00:34:12.220 Writing inode tables: 0/4 done 00:34:12.220 Creating journal (4096 blocks): done 00:34:12.478 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:34:12.478 17:12:13 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 9 -ge 15 ']' 00:34:12.478 [2024-07-22 17:12:13.875616] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:34:12.478 17:12:13 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=10 00:34:12.478 17:12:13 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:34:13.413 17:12:14 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:34:13.413 mke2fs 1.46.5 (30-Dec-2021) 00:34:13.671 Discarding device blocks: 0/131072 done 00:34:13.671 Creating filesystem with 131072 4k blocks and 32768 inodes 00:34:13.671 Filesystem UUID: adf447c7-61d1-4bd0-b0bd-2b6be39d2c62 00:34:13.671 Superblock backups stored on blocks: 00:34:13.671 32768, 98304 00:34:13.671 00:34:13.671 Allocating group tables: 0/4 done 00:34:13.671 Writing inode tables: 0/4 done 00:34:13.671 Creating journal (4096 blocks): done 00:34:13.671 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:34:13.671 17:12:15 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 10 -ge 15 ']' 00:34:13.671 17:12:15 
iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=11 00:34:13.671 [2024-07-22 17:12:15.248383] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:34:13.671 17:12:15 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:34:15.046 17:12:16 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:34:15.046 mke2fs 1.46.5 (30-Dec-2021) 00:34:15.046 Discarding device blocks: 0/131072 done 00:34:15.046 Creating filesystem with 131072 4k blocks and 32768 inodes 00:34:15.046 Filesystem UUID: e56aa173-73be-499a-8125-f22256601f1a 00:34:15.046 Superblock backups stored on blocks: 00:34:15.046 32768, 98304 00:34:15.046 00:34:15.046 Allocating group tables: 0/4 done 00:34:15.046 Writing inode tables: 0/4 done 00:34:15.046 Creating journal (4096 blocks): done 00:34:15.046 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:34:15.046 17:12:16 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 11 -ge 15 ']' 00:34:15.046 17:12:16 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=12 00:34:15.046 [2024-07-22 17:12:16.585155] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:34:15.046 17:12:16 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:34:16.003 17:12:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:34:16.003 mke2fs 1.46.5 (30-Dec-2021) 00:34:16.261 Discarding device blocks: 0/131072 done 00:34:16.261 Creating filesystem with 131072 4k blocks and 32768 inodes 00:34:16.261 Filesystem UUID: 3a55a027-c5ba-4b7c-a353-e58f7b0ef2d0 00:34:16.261 Superblock backups stored on blocks: 00:34:16.261 32768, 98304 00:34:16.261 00:34:16.261 Allocating group tables: 0/4 done 00:34:16.261 Writing inode tables: 0/4 done 00:34:16.261 Creating journal (4096 
blocks): done 00:34:16.520 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:34:16.520 17:12:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 12 -ge 15 ']' 00:34:16.520 [2024-07-22 17:12:17.919174] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:34:16.520 17:12:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=13 00:34:16.520 17:12:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:34:17.454 17:12:18 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:34:17.454 mke2fs 1.46.5 (30-Dec-2021) 00:34:17.712 Discarding device blocks: 0/131072 done 00:34:17.712 Creating filesystem with 131072 4k blocks and 32768 inodes 00:34:17.712 Filesystem UUID: d6fb9ce8-42f0-4110-a9ba-c796b8cc9bbf 00:34:17.712 Superblock backups stored on blocks: 00:34:17.712 32768, 98304 00:34:17.712 00:34:17.712 Allocating group tables: 0/4 done 00:34:17.712 Writing inode tables: 0/4 done 00:34:17.712 Creating journal (4096 blocks): done 00:34:17.712 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:34:17.712 17:12:19 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 13 -ge 15 ']' 00:34:17.712 17:12:19 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=14 00:34:17.712 [2024-07-22 17:12:19.282235] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:34:17.712 17:12:19 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:34:19.087 17:12:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:34:19.087 mke2fs 1.46.5 (30-Dec-2021) 00:34:19.087 Discarding device blocks: 0/131072 done 00:34:19.087 Creating filesystem with 131072 4k blocks and 32768 
inodes 00:34:19.087 Filesystem UUID: c26008e8-87fe-42de-be75-c6ee9f5c6942 00:34:19.087 Superblock backups stored on blocks: 00:34:19.087 32768, 98304 00:34:19.087 00:34:19.087 Allocating group tables: 0/4 done 00:34:19.087 Writing inode tables: 0/4 done 00:34:19.087 Creating journal (4096 blocks): done 00:34:19.087 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:34:19.087 17:12:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 14 -ge 15 ']' 00:34:19.087 17:12:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=15 00:34:19.087 [2024-07-22 17:12:20.655308] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:34:19.087 17:12:20 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:34:20.464 17:12:21 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:34:20.464 mke2fs 1.46.5 (30-Dec-2021) 00:34:20.464 Discarding device blocks: 0/131072 done 00:34:20.464 Creating filesystem with 131072 4k blocks and 32768 inodes 00:34:20.464 Filesystem UUID: 529b1fcb-f7bc-4611-9644-74519c1ca242 00:34:20.464 Superblock backups stored on blocks: 00:34:20.464 32768, 98304 00:34:20.464 00:34:20.464 Allocating group tables: 0/4 done 00:34:20.464 Writing inode tables: 0/4 done 00:34:20.464 Creating journal (4096 blocks): done 00:34:20.464 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:34:20.464 mkfs failed as expected 00:34:20.464 Cleaning up iSCSI connection 00:34:20.464 17:12:22 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 15 -ge 15 ']' 00:34:20.464 17:12:22 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # return 1 00:34:20.464 [2024-07-22 17:12:22.024732] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 
00:34:20.464 17:12:22 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@70 -- # echo 'mkfs failed as expected' 00:34:20.464 17:12:22 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@73 -- # iscsicleanup 00:34:20.464 17:12:22 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:34:20.464 17:12:22 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:34:20.464 Logging out of session [sid: 71, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] 00:34:20.464 Logout of [sid: 71, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:34:20.464 17:12:22 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:34:20.464 17:12:22 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@983 -- # rm -rf 00:34:20.464 17:12:22 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_inject_error EE_Malloc0 clear failure 00:34:21.031 17:12:22 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_delete_target_node iqn.2013-06.com.intel.ch.spdk:Target0 00:34:21.289 Error injection test done 00:34:21.289 17:12:22 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@76 -- # echo 'Error injection test done' 00:34:21.289 17:12:22 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@78 -- # get_bdev_size Nvme0n1 00:34:21.289 17:12:22 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1378 -- # local bdev_name=Nvme0n1 00:34:21.289 17:12:22 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1379 -- # local bdev_info 00:34:21.289 17:12:22 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1380 -- # local bs 00:34:21.289 17:12:22 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1381 -- # local nb 00:34:21.289 17:12:22 iscsi_tgt.iscsi_tgt_ext4test -- 
common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 00:34:21.547 17:12:22 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:34:21.547 { 00:34:21.547 "name": "Nvme0n1", 00:34:21.547 "aliases": [ 00:34:21.547 "d2412cdd-39c7-446d-bd82-c761f4352a33" 00:34:21.547 ], 00:34:21.547 "product_name": "NVMe disk", 00:34:21.547 "block_size": 4096, 00:34:21.547 "num_blocks": 1310720, 00:34:21.547 "uuid": "d2412cdd-39c7-446d-bd82-c761f4352a33", 00:34:21.547 "assigned_rate_limits": { 00:34:21.547 "rw_ios_per_sec": 0, 00:34:21.547 "rw_mbytes_per_sec": 0, 00:34:21.547 "r_mbytes_per_sec": 0, 00:34:21.547 "w_mbytes_per_sec": 0 00:34:21.547 }, 00:34:21.547 "claimed": false, 00:34:21.547 "zoned": false, 00:34:21.547 "supported_io_types": { 00:34:21.547 "read": true, 00:34:21.547 "write": true, 00:34:21.547 "unmap": true, 00:34:21.547 "flush": true, 00:34:21.547 "reset": true, 00:34:21.547 "nvme_admin": true, 00:34:21.547 "nvme_io": true, 00:34:21.547 "nvme_io_md": false, 00:34:21.547 "write_zeroes": true, 00:34:21.547 "zcopy": false, 00:34:21.547 "get_zone_info": false, 00:34:21.547 "zone_management": false, 00:34:21.547 "zone_append": false, 00:34:21.547 "compare": true, 00:34:21.547 "compare_and_write": false, 00:34:21.547 "abort": true, 00:34:21.547 "seek_hole": false, 00:34:21.547 "seek_data": false, 00:34:21.547 "copy": true, 00:34:21.547 "nvme_iov_md": false 00:34:21.547 }, 00:34:21.547 "driver_specific": { 00:34:21.547 "nvme": [ 00:34:21.547 { 00:34:21.547 "pci_address": "0000:00:10.0", 00:34:21.547 "trid": { 00:34:21.547 "trtype": "PCIe", 00:34:21.547 "traddr": "0000:00:10.0" 00:34:21.547 }, 00:34:21.547 "ctrlr_data": { 00:34:21.547 "cntlid": 0, 00:34:21.547 "vendor_id": "0x1b36", 00:34:21.547 "model_number": "QEMU NVMe Ctrl", 00:34:21.547 "serial_number": "12340", 00:34:21.547 "firmware_revision": "8.0.0", 00:34:21.547 "subnqn": "nqn.2019-08.org.qemu:12340", 00:34:21.547 "oacs": { 
00:34:21.547 "security": 0, 00:34:21.547 "format": 1, 00:34:21.547 "firmware": 0, 00:34:21.547 "ns_manage": 1 00:34:21.547 }, 00:34:21.547 "multi_ctrlr": false, 00:34:21.547 "ana_reporting": false 00:34:21.547 }, 00:34:21.547 "vs": { 00:34:21.547 "nvme_version": "1.4" 00:34:21.547 }, 00:34:21.547 "ns_data": { 00:34:21.547 "id": 1, 00:34:21.547 "can_share": false 00:34:21.547 } 00:34:21.547 } 00:34:21.547 ], 00:34:21.547 "mp_policy": "active_passive" 00:34:21.547 } 00:34:21.547 } 00:34:21.547 ]' 00:34:21.547 17:12:22 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:34:21.547 17:12:22 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1383 -- # bs=4096 00:34:21.547 17:12:22 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:34:21.547 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1384 -- # nb=1310720 00:34:21.547 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:34:21.547 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1388 -- # echo 5120 00:34:21.547 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@78 -- # bdev_size=5120 00:34:21.547 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@79 -- # split_size=2560 00:34:21.547 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@80 -- # split_size=2560 00:34:21.547 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create Nvme0n1 2 -s 2560 00:34:21.806 Nvme0n1p0 Nvme0n1p1 00:34:21.806 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias Nvme0n1p0:0 1:2 64 -d 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@84 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:34:22.064 10.0.0.1:3260,1 
iqn.2013-06.com.intel.ch.spdk:Target1 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@85 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:34:22.064 Logging in to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] 00:34:22.064 Login to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@86 -- # waitforiscsidevices 1 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@116 -- # local num=1 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:34:22.064 [2024-07-22 17:12:23.616050] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # n=1 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@123 -- # return 0 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # iscsiadm -m session -P 3 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # grep 'Attached scsi disk' 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # awk '{print $4}' 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # head -n1 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # dev=sda 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@89 -- 
# waitforfile /dev/sda 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1265 -- # local i=0 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda ']' 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda ']' 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1276 -- # return 0 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@91 -- # make_filesystem ext4 /dev/sda 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@924 -- # local fstype=ext4 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@926 -- # local i=0 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@927 -- # local force 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@930 -- # force=-F 00:34:22.064 17:12:23 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:34:22.064 mke2fs 1.46.5 (30-Dec-2021) 00:34:22.064 Discarding device blocks: 0/655360 done 00:34:22.064 Creating filesystem with 655360 4k blocks and 163840 inodes 00:34:22.064 Filesystem UUID: 821511c1-3d80-4204-8c47-e3f3d8381f51 00:34:22.064 Superblock backups stored on blocks: 00:34:22.064 32768, 98304, 163840, 229376, 294912 00:34:22.064 00:34:22.064 Allocating group tables: 0/20 done 00:34:22.064 Writing inode tables: 0/20 done 00:34:22.630 Creating journal (16384 blocks): done 00:34:22.630 Writing superblocks and filesystem accounting information: 0/20 done 00:34:22.630 00:34:22.630 17:12:24 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@943 -- # return 0 00:34:22.630 
17:12:24 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@92 -- # mkdir -p /mnt/sdadir 00:34:22.630 17:12:24 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@93 -- # mount -o sync /dev/sda /mnt/sdadir 00:34:22.630 17:12:24 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@95 -- # rsync -qav --exclude=.git '--exclude=*.o' /home/vagrant/spdk_repo/spdk/ /mnt/sdadir/spdk 00:35:59.155 17:13:59 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@97 -- # make -C /mnt/sdadir/spdk clean 00:35:59.155 make: Entering directory '/mnt/sdadir/spdk' 00:36:55.370 make[1]: Nothing to be done for 'clean'. 00:36:55.370 make: Leaving directory '/mnt/sdadir/spdk' 00:36:55.370 17:14:50 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@98 -- # cd /mnt/sdadir/spdk 00:36:55.370 17:14:50 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@98 -- # ./configure --disable-unit-tests --disable-tests 00:36:55.370 Using default SPDK env in /mnt/sdadir/spdk/lib/env_dpdk 00:36:55.370 Using default DPDK in /mnt/sdadir/spdk/dpdk/build 00:37:13.474 Configuring ISA-L (logfile: /mnt/sdadir/spdk/.spdk-isal.log)...done. 00:37:35.400 Configuring ISA-L-crypto (logfile: /mnt/sdadir/spdk/.spdk-isal-crypto.log)...done. 00:37:35.400 Creating mk/config.mk...done. 00:37:35.400 Creating mk/cc.flags.mk...done. 00:37:35.400 Type 'make' to build. 00:37:35.400 17:15:35 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@99 -- # make -C /mnt/sdadir/spdk -j 00:37:35.400 make: Entering directory '/mnt/sdadir/spdk' 00:37:35.400 make[1]: Nothing to be done for 'all'. 
00:38:07.475 The Meson build system 00:38:07.475 Version: 1.3.1 00:38:07.475 Source dir: /mnt/sdadir/spdk/dpdk 00:38:07.475 Build dir: /mnt/sdadir/spdk/dpdk/build-tmp 00:38:07.475 Build type: native build 00:38:07.475 Program cat found: YES (/usr/bin/cat) 00:38:07.475 Project name: DPDK 00:38:07.475 Project version: 24.03.0 00:38:07.475 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:38:07.475 C linker for the host machine: cc ld.bfd 2.39-16 00:38:07.475 Host machine cpu family: x86_64 00:38:07.475 Host machine cpu: x86_64 00:38:07.475 Program pkg-config found: YES (/usr/bin/pkg-config) 00:38:07.475 Program check-symbols.sh found: YES (/mnt/sdadir/spdk/dpdk/buildtools/check-symbols.sh) 00:38:07.475 Program options-ibverbs-static.sh found: YES (/mnt/sdadir/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:38:07.475 Program python3 found: YES (/usr/bin/python3) 00:38:07.475 Program cat found: YES (/usr/bin/cat) 00:38:07.475 Compiler for C supports arguments -march=native: YES 00:38:07.475 Checking for size of "void *" : 8 00:38:07.475 Checking for size of "void *" : 8 (cached) 00:38:07.475 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:38:07.475 Library m found: YES 00:38:07.475 Library numa found: YES 00:38:07.475 Has header "numaif.h" : YES 00:38:07.475 Library fdt found: NO 00:38:07.475 Library execinfo found: NO 00:38:07.475 Has header "execinfo.h" : YES 00:38:07.475 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:38:07.475 Run-time dependency libarchive found: NO (tried pkgconfig) 00:38:07.475 Run-time dependency libbsd found: NO (tried pkgconfig) 00:38:07.475 Run-time dependency jansson found: NO (tried pkgconfig) 00:38:07.475 Run-time dependency openssl found: YES 3.0.9 00:38:07.475 Run-time dependency libpcap found: YES 1.10.4 00:38:07.475 Has header "pcap.h" with dependency libpcap: YES 00:38:07.475 Compiler for C supports arguments -Wcast-qual: YES 00:38:07.475 Compiler for C 
supports arguments -Wdeprecated: YES 00:38:07.475 Compiler for C supports arguments -Wformat: YES 00:38:07.475 Compiler for C supports arguments -Wformat-nonliteral: YES 00:38:07.475 Compiler for C supports arguments -Wformat-security: YES 00:38:07.475 Compiler for C supports arguments -Wmissing-declarations: YES 00:38:07.475 Compiler for C supports arguments -Wmissing-prototypes: YES 00:38:07.475 Compiler for C supports arguments -Wnested-externs: YES 00:38:07.475 Compiler for C supports arguments -Wold-style-definition: YES 00:38:07.475 Compiler for C supports arguments -Wpointer-arith: YES 00:38:07.475 Compiler for C supports arguments -Wsign-compare: YES 00:38:07.475 Compiler for C supports arguments -Wstrict-prototypes: YES 00:38:07.475 Compiler for C supports arguments -Wundef: YES 00:38:07.475 Compiler for C supports arguments -Wwrite-strings: YES 00:38:07.475 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:38:07.475 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:38:07.475 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:38:07.475 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:38:07.475 Program objdump found: YES (/usr/bin/objdump) 00:38:07.475 Compiler for C supports arguments -mavx512f: YES 00:38:07.475 Checking if "AVX512 checking" compiles: YES 00:38:07.475 Fetching value of define "__SSE4_2__" : 1 00:38:07.475 Fetching value of define "__AES__" : 1 00:38:07.475 Fetching value of define "__AVX__" : 1 00:38:07.475 Fetching value of define "__AVX2__" : 1 00:38:07.475 Fetching value of define "__AVX512BW__" : (undefined) 00:38:07.475 Fetching value of define "__AVX512CD__" : (undefined) 00:38:07.475 Fetching value of define "__AVX512DQ__" : (undefined) 00:38:07.475 Fetching value of define "__AVX512F__" : (undefined) 00:38:07.475 Fetching value of define "__AVX512VL__" : (undefined) 00:38:07.475 Fetching value of define "__PCLMUL__" : 1 00:38:07.475 Fetching value of 
define "__RDRND__" : 1 00:38:07.475 Fetching value of define "__RDSEED__" : 1 00:38:07.475 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:38:07.475 Fetching value of define "__znver1__" : (undefined) 00:38:07.475 Fetching value of define "__znver2__" : (undefined) 00:38:07.475 Fetching value of define "__znver3__" : (undefined) 00:38:07.475 Fetching value of define "__znver4__" : (undefined) 00:38:07.475 Compiler for C supports arguments -Wno-format-truncation: YES 00:38:07.475 Checking for function "getentropy" : NO 00:38:07.475 Fetching value of define "__PCLMUL__" : 1 (cached) 00:38:07.475 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:38:07.475 Compiler for C supports arguments -mpclmul: YES 00:38:07.475 Compiler for C supports arguments -maes: YES 00:38:07.475 Compiler for C supports arguments -mavx512f: YES (cached) 00:38:07.475 Compiler for C supports arguments -mavx512bw: YES 00:38:07.475 Compiler for C supports arguments -mavx512dq: YES 00:38:07.475 Compiler for C supports arguments -mavx512vl: YES 00:38:07.475 Compiler for C supports arguments -mvpclmulqdq: YES 00:38:07.475 Compiler for C supports arguments -mavx2: YES 00:38:07.475 Compiler for C supports arguments -mavx: YES 00:38:07.475 Compiler for C supports arguments -Wno-cast-qual: YES 00:38:07.475 Has header "linux/userfaultfd.h" : YES 00:38:07.475 Has header "linux/vduse.h" : YES 00:38:07.475 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:38:07.475 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:38:07.475 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:38:07.475 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:38:07.475 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:38:07.475 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:38:07.475 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:38:07.475 Program doxygen found: YES (/usr/bin/doxygen) 00:38:07.475 Configuring doxy-api-html.conf using configuration 00:38:07.475 Configuring doxy-api-man.conf using configuration 00:38:07.475 Program mandb found: YES (/usr/bin/mandb) 00:38:07.475 Program sphinx-build found: NO 00:38:07.475 Configuring rte_build_config.h using configuration 00:38:07.475 Message: 00:38:07.475 ================= 00:38:07.475 Applications Enabled 00:38:07.475 ================= 00:38:07.475 00:38:07.475 apps: 00:38:07.475 00:38:07.475 00:38:07.475 Message: 00:38:07.475 ================= 00:38:07.475 Libraries Enabled 00:38:07.475 ================= 00:38:07.475 00:38:07.475 libs: 00:38:07.475 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:38:07.475 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:38:07.475 cryptodev, dmadev, power, reorder, security, vhost, 00:38:07.475 00:38:07.475 Message: 00:38:07.475 =============== 00:38:07.475 Drivers Enabled 00:38:07.475 =============== 00:38:07.475 00:38:07.475 common: 00:38:07.475 00:38:07.475 bus: 00:38:07.475 pci, vdev, 00:38:07.475 mempool: 00:38:07.475 ring, 00:38:07.475 dma: 00:38:07.475 00:38:07.475 net: 00:38:07.475 00:38:07.475 crypto: 00:38:07.475 00:38:07.475 compress: 00:38:07.475 00:38:07.475 vdpa: 00:38:07.475 00:38:07.475 00:38:07.475 Message: 00:38:07.475 ================= 00:38:07.475 Content Skipped 00:38:07.475 ================= 00:38:07.475 00:38:07.475 apps: 00:38:07.475 dumpcap: explicitly disabled via build config 00:38:07.475 graph: explicitly disabled via build config 00:38:07.475 pdump: explicitly disabled via build config 00:38:07.475 proc-info: explicitly disabled via build config 00:38:07.475 test-acl: explicitly disabled via build config 00:38:07.475 test-bbdev: explicitly disabled via build config 00:38:07.475 test-cmdline: explicitly disabled via build config 00:38:07.475 test-compress-perf: explicitly disabled via build config 00:38:07.475 test-crypto-perf: explicitly disabled via 
build config 00:38:07.475 test-dma-perf: explicitly disabled via build config 00:38:07.475 test-eventdev: explicitly disabled via build config 00:38:07.475 test-fib: explicitly disabled via build config 00:38:07.475 test-flow-perf: explicitly disabled via build config 00:38:07.475 test-gpudev: explicitly disabled via build config 00:38:07.475 test-mldev: explicitly disabled via build config 00:38:07.475 test-pipeline: explicitly disabled via build config 00:38:07.475 test-pmd: explicitly disabled via build config 00:38:07.475 test-regex: explicitly disabled via build config 00:38:07.475 test-sad: explicitly disabled via build config 00:38:07.475 test-security-perf: explicitly disabled via build config 00:38:07.475 00:38:07.475 libs: 00:38:07.475 argparse: explicitly disabled via build config 00:38:07.475 metrics: explicitly disabled via build config 00:38:07.475 acl: explicitly disabled via build config 00:38:07.475 bbdev: explicitly disabled via build config 00:38:07.475 bitratestats: explicitly disabled via build config 00:38:07.475 bpf: explicitly disabled via build config 00:38:07.475 cfgfile: explicitly disabled via build config 00:38:07.475 distributor: explicitly disabled via build config 00:38:07.475 efd: explicitly disabled via build config 00:38:07.475 eventdev: explicitly disabled via build config 00:38:07.475 dispatcher: explicitly disabled via build config 00:38:07.475 gpudev: explicitly disabled via build config 00:38:07.475 gro: explicitly disabled via build config 00:38:07.475 gso: explicitly disabled via build config 00:38:07.475 ip_frag: explicitly disabled via build config 00:38:07.475 jobstats: explicitly disabled via build config 00:38:07.475 latencystats: explicitly disabled via build config 00:38:07.475 lpm: explicitly disabled via build config 00:38:07.475 member: explicitly disabled via build config 00:38:07.475 pcapng: explicitly disabled via build config 00:38:07.475 rawdev: explicitly disabled via build config 00:38:07.475 regexdev: 
explicitly disabled via build config 00:38:07.475 mldev: explicitly disabled via build config 00:38:07.475 rib: explicitly disabled via build config 00:38:07.475 sched: explicitly disabled via build config 00:38:07.475 stack: explicitly disabled via build config 00:38:07.475 ipsec: explicitly disabled via build config 00:38:07.475 pdcp: explicitly disabled via build config 00:38:07.475 fib: explicitly disabled via build config 00:38:07.475 port: explicitly disabled via build config 00:38:07.475 pdump: explicitly disabled via build config 00:38:07.475 table: explicitly disabled via build config 00:38:07.475 pipeline: explicitly disabled via build config 00:38:07.475 graph: explicitly disabled via build config 00:38:07.475 node: explicitly disabled via build config 00:38:07.475 00:38:07.475 drivers: 00:38:07.475 common/cpt: not in enabled drivers build config 00:38:07.475 common/dpaax: not in enabled drivers build config 00:38:07.475 common/iavf: not in enabled drivers build config 00:38:07.475 common/idpf: not in enabled drivers build config 00:38:07.475 common/ionic: not in enabled drivers build config 00:38:07.475 common/mvep: not in enabled drivers build config 00:38:07.475 common/octeontx: not in enabled drivers build config 00:38:07.475 bus/auxiliary: not in enabled drivers build config 00:38:07.475 bus/cdx: not in enabled drivers build config 00:38:07.475 bus/dpaa: not in enabled drivers build config 00:38:07.475 bus/fslmc: not in enabled drivers build config 00:38:07.475 bus/ifpga: not in enabled drivers build config 00:38:07.475 bus/platform: not in enabled drivers build config 00:38:07.475 bus/uacce: not in enabled drivers build config 00:38:07.475 bus/vmbus: not in enabled drivers build config 00:38:07.475 common/cnxk: not in enabled drivers build config 00:38:07.475 common/mlx5: not in enabled drivers build config 00:38:07.475 common/nfp: not in enabled drivers build config 00:38:07.475 common/nitrox: not in enabled drivers build config 00:38:07.475 
common/qat: not in enabled drivers build config 00:38:07.475 common/sfc_efx: not in enabled drivers build config 00:38:07.475 mempool/bucket: not in enabled drivers build config 00:38:07.475 mempool/cnxk: not in enabled drivers build config 00:38:07.475 mempool/dpaa: not in enabled drivers build config 00:38:07.475 mempool/dpaa2: not in enabled drivers build config 00:38:07.475 mempool/octeontx: not in enabled drivers build config 00:38:07.475 mempool/stack: not in enabled drivers build config 00:38:07.475 dma/cnxk: not in enabled drivers build config 00:38:07.475 dma/dpaa: not in enabled drivers build config 00:38:07.475 dma/dpaa2: not in enabled drivers build config 00:38:07.475 dma/hisilicon: not in enabled drivers build config 00:38:07.475 dma/idxd: not in enabled drivers build config 00:38:07.475 dma/ioat: not in enabled drivers build config 00:38:07.475 dma/skeleton: not in enabled drivers build config 00:38:07.475 net/af_packet: not in enabled drivers build config 00:38:07.475 net/af_xdp: not in enabled drivers build config 00:38:07.475 net/ark: not in enabled drivers build config 00:38:07.475 net/atlantic: not in enabled drivers build config 00:38:07.475 net/avp: not in enabled drivers build config 00:38:07.475 net/axgbe: not in enabled drivers build config 00:38:07.475 net/bnx2x: not in enabled drivers build config 00:38:07.475 net/bnxt: not in enabled drivers build config 00:38:07.475 net/bonding: not in enabled drivers build config 00:38:07.475 net/cnxk: not in enabled drivers build config 00:38:07.475 net/cpfl: not in enabled drivers build config 00:38:07.475 net/cxgbe: not in enabled drivers build config 00:38:07.475 net/dpaa: not in enabled drivers build config 00:38:07.475 net/dpaa2: not in enabled drivers build config 00:38:07.475 net/e1000: not in enabled drivers build config 00:38:07.475 net/ena: not in enabled drivers build config 00:38:07.475 net/enetc: not in enabled drivers build config 00:38:07.475 net/enetfec: not in enabled drivers build 
config 00:38:07.475 net/enic: not in enabled drivers build config 00:38:07.475 net/failsafe: not in enabled drivers build config 00:38:07.475 net/fm10k: not in enabled drivers build config 00:38:07.475 net/gve: not in enabled drivers build config 00:38:07.475 net/hinic: not in enabled drivers build config 00:38:07.475 net/hns3: not in enabled drivers build config 00:38:07.475 net/i40e: not in enabled drivers build config 00:38:07.475 net/iavf: not in enabled drivers build config 00:38:07.475 net/ice: not in enabled drivers build config 00:38:07.475 net/idpf: not in enabled drivers build config 00:38:07.475 net/igc: not in enabled drivers build config 00:38:07.475 net/ionic: not in enabled drivers build config 00:38:07.475 net/ipn3ke: not in enabled drivers build config 00:38:07.475 net/ixgbe: not in enabled drivers build config 00:38:07.475 net/mana: not in enabled drivers build config 00:38:07.475 net/memif: not in enabled drivers build config 00:38:07.475 net/mlx4: not in enabled drivers build config 00:38:07.475 net/mlx5: not in enabled drivers build config 00:38:07.475 net/mvneta: not in enabled drivers build config 00:38:07.475 net/mvpp2: not in enabled drivers build config 00:38:07.475 net/netvsc: not in enabled drivers build config 00:38:07.475 net/nfb: not in enabled drivers build config 00:38:07.475 net/nfp: not in enabled drivers build config 00:38:07.475 net/ngbe: not in enabled drivers build config 00:38:07.475 net/null: not in enabled drivers build config 00:38:07.475 net/octeontx: not in enabled drivers build config 00:38:07.475 net/octeon_ep: not in enabled drivers build config 00:38:07.475 net/pcap: not in enabled drivers build config 00:38:07.475 net/pfe: not in enabled drivers build config 00:38:07.475 net/qede: not in enabled drivers build config 00:38:07.475 net/ring: not in enabled drivers build config 00:38:07.475 net/sfc: not in enabled drivers build config 00:38:07.475 net/softnic: not in enabled drivers build config 00:38:07.475 net/tap: 
not in enabled drivers build config 00:38:07.475 net/thunderx: not in enabled drivers build config 00:38:07.475 net/txgbe: not in enabled drivers build config 00:38:07.475 net/vdev_netvsc: not in enabled drivers build config 00:38:07.475 net/vhost: not in enabled drivers build config 00:38:07.475 net/virtio: not in enabled drivers build config 00:38:07.475 net/vmxnet3: not in enabled drivers build config 00:38:07.475 raw/*: missing internal dependency, "rawdev" 00:38:07.476 crypto/armv8: not in enabled drivers build config 00:38:07.476 crypto/bcmfs: not in enabled drivers build config 00:38:07.476 crypto/caam_jr: not in enabled drivers build config 00:38:07.476 crypto/ccp: not in enabled drivers build config 00:38:07.476 crypto/cnxk: not in enabled drivers build config 00:38:07.476 crypto/dpaa_sec: not in enabled drivers build config 00:38:07.476 crypto/dpaa2_sec: not in enabled drivers build config 00:38:07.476 crypto/ipsec_mb: not in enabled drivers build config 00:38:07.476 crypto/mlx5: not in enabled drivers build config 00:38:07.476 crypto/mvsam: not in enabled drivers build config 00:38:07.476 crypto/nitrox: not in enabled drivers build config 00:38:07.476 crypto/null: not in enabled drivers build config 00:38:07.476 crypto/octeontx: not in enabled drivers build config 00:38:07.476 crypto/openssl: not in enabled drivers build config 00:38:07.476 crypto/scheduler: not in enabled drivers build config 00:38:07.476 crypto/uadk: not in enabled drivers build config 00:38:07.476 crypto/virtio: not in enabled drivers build config 00:38:07.476 compress/isal: not in enabled drivers build config 00:38:07.476 compress/mlx5: not in enabled drivers build config 00:38:07.476 compress/nitrox: not in enabled drivers build config 00:38:07.476 compress/octeontx: not in enabled drivers build config 00:38:07.476 compress/zlib: not in enabled drivers build config 00:38:07.476 regex/*: missing internal dependency, "regexdev" 00:38:07.476 ml/*: missing internal dependency, "mldev" 
00:38:07.476 vdpa/ifc: not in enabled drivers build config 00:38:07.476 vdpa/mlx5: not in enabled drivers build config 00:38:07.476 vdpa/nfp: not in enabled drivers build config 00:38:07.476 vdpa/sfc: not in enabled drivers build config 00:38:07.476 event/*: missing internal dependency, "eventdev" 00:38:07.476 baseband/*: missing internal dependency, "bbdev" 00:38:07.476 gpu/*: missing internal dependency, "gpudev" 00:38:07.476 00:38:07.476 00:38:07.476 Build targets in project: 61 00:38:07.476 00:38:07.476 DPDK 24.03.0 00:38:07.476 00:38:07.476 User defined options 00:38:07.476 default_library : static 00:38:07.476 libdir : lib 00:38:07.476 prefix : /mnt/sdadir/spdk/dpdk/build 00:38:07.476 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Wno-error 00:38:07.476 c_link_args : 00:38:07.476 cpu_instruction_set: native 00:38:07.476 disable_apps : test,test-eventdev,pdump,test-cmdline,test-fib,graph,test-pmd,test-compress-perf,test-mldev,proc-info,test-crypto-perf,test-gpudev,test-acl,test-bbdev,test-sad,test-regex,dumpcap,test-security-perf,test-pipeline,test-dma-perf,test-flow-perf 00:38:07.476 disable_libs : stack,pdump,efd,pcapng,node,port,graph,distributor,jobstats,latencystats,mldev,gso,bpf,rawdev,ipsec,regexdev,ip_frag,dispatcher,sched,pdcp,gro,eventdev,gpudev,bitratestats,bbdev,metrics,pipeline,argparse,lpm,member,table,acl,rib,fib,cfgfile 00:38:07.476 enable_docs : false 00:38:07.476 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:38:07.476 enable_kmods : false 00:38:07.476 max_lcores : 128 00:38:07.476 tests : false 00:38:07.476 00:38:07.476 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:38:07.476 ninja: Entering directory `/mnt/sdadir/spdk/dpdk/build-tmp' 00:38:07.476 [1/244] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:38:07.476 [2/244] Compiling C object lib/librte_log.a.p/log_log.c.o 00:38:07.476 [3/244] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 
00:38:07.476 [4/244] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:38:07.476 [5/244] Linking static target lib/librte_kvargs.a 00:38:07.476 [6/244] Linking static target lib/librte_log.a 00:38:07.476 [7/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:38:07.476 [8/244] Linking target lib/librte_log.so.24.1 00:38:07.476 [9/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:38:07.476 [10/244] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:38:07.476 [11/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:38:07.476 [12/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:38:07.476 [13/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:38:07.476 [14/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:38:07.476 [15/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:38:07.476 [16/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:38:07.476 [17/244] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:38:07.476 [18/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:38:07.476 [19/244] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:38:07.476 [20/244] Linking target lib/librte_kvargs.so.24.1 00:38:07.476 [21/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:38:07.476 [22/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:38:07.476 [23/244] Linking static target lib/librte_telemetry.a 00:38:07.476 [24/244] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:38:07.476 [25/244] Linking target lib/librte_telemetry.so.24.1 00:38:07.476 [26/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:38:07.476 [27/244] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:38:07.476 [28/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:38:07.476 [29/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:38:07.476 [30/244] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:38:07.476 [31/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:38:07.476 [32/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:38:07.476 [33/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:38:07.476 [34/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:38:07.476 [35/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:38:07.476 [36/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:38:07.476 [37/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:38:07.476 [38/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:38:07.476 [39/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:38:07.476 [40/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:38:07.476 [41/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:38:07.476 [42/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:38:07.476 [43/244] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:38:07.476 [44/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:38:07.476 [45/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:38:07.734 [46/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:38:07.734 [47/244] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:38:07.734 [48/244] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:38:08.034 [49/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:38:08.034 [50/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:38:08.034 [51/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:38:08.034 [52/244] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:38:08.034 [53/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:38:08.292 [54/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:38:08.292 [55/244] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:38:08.550 [56/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:38:08.550 [57/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:38:08.550 [58/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:38:08.550 [59/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:38:08.550 [60/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:38:08.550 [61/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:38:08.550 [62/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:38:08.550 [63/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:38:09.116 [64/244] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:38:09.116 [65/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:38:09.375 [66/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:38:09.375 [67/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:38:09.633 [68/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:38:09.633 [69/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:38:09.633 [70/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:38:09.633 [71/244] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:38:09.633 [72/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:38:09.633 [73/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:38:09.633 [74/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:38:09.891 [75/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:38:09.891 [76/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:38:09.891 [77/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:38:09.891 [78/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:38:09.891 [79/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:38:10.149 [80/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:38:10.407 [81/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:38:10.665 [82/244] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:38:10.665 [83/244] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:38:10.665 [84/244] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:38:10.665 [85/244] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:38:10.924 [86/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:38:10.924 [87/244] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:38:10.924 [88/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:38:11.182 [89/244] Linking static target lib/net/libnet_crc_avx512_lib.a 00:38:11.182 [90/244] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:38:11.182 [91/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:38:11.182 [92/244] Linking static target lib/librte_ring.a 00:38:11.182 [93/244] Linking static target lib/librte_mempool.a 00:38:11.182 [94/244] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:38:11.440 [95/244] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:38:11.440 [96/244] Linking static target lib/librte_mbuf.a 00:38:11.440 [97/244] Linking static target lib/librte_rcu.a 00:38:11.440 [98/244] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:38:11.698 [99/244] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:38:11.699 [100/244] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:38:11.699 [101/244] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:38:11.956 [102/244] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:38:11.957 [103/244] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:38:11.957 [104/244] Linking static target lib/librte_meter.a 00:38:11.957 [105/244] Linking static target lib/librte_net.a 00:38:11.957 [106/244] Linking static target lib/librte_eal.a 00:38:12.215 [107/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:38:12.215 [108/244] Linking target lib/librte_eal.so.24.1 00:38:12.215 [109/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:38:12.473 [110/244] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:38:12.473 [111/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:38:12.473 [112/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:38:12.473 [113/244] Linking target lib/librte_ring.so.24.1 00:38:12.473 [114/244] Linking target lib/librte_meter.so.24.1 00:38:12.731 [115/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:38:12.731 [116/244] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:38:12.731 [117/244] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:38:12.989 [118/244] Linking target lib/librte_rcu.so.24.1 00:38:12.989 [119/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 
00:38:12.989 [120/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:38:13.248 [121/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:38:13.248 [122/244] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:38:13.248 [123/244] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:38:13.248 [124/244] Linking target lib/librte_mempool.so.24.1 00:38:13.248 [125/244] Linking static target lib/librte_pci.a 00:38:13.506 [126/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:38:13.506 [127/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:38:13.506 [128/244] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:38:13.506 [129/244] Linking target lib/librte_pci.so.24.1 00:38:13.506 [130/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:38:13.506 [131/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:38:13.506 [132/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:38:13.506 [133/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:38:13.764 [134/244] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:38:13.764 [135/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:38:13.764 [136/244] Linking target lib/librte_mbuf.so.24.1 00:38:13.764 [137/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:38:13.764 [138/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:38:13.764 [139/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:38:13.764 [140/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:38:13.764 [141/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:38:13.764 [142/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:38:13.764 [143/244] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:38:14.022 [144/244] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:38:14.022 [145/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:38:14.022 [146/244] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:38:14.022 [147/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:38:14.022 [148/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:38:14.022 [149/244] Linking target lib/librte_net.so.24.1 00:38:14.280 [150/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:38:14.280 [151/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:38:14.280 [152/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:38:14.280 [153/244] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:38:14.539 [154/244] Linking static target lib/librte_cmdline.a 00:38:14.539 [155/244] Linking target lib/librte_cmdline.so.24.1 00:38:14.539 [156/244] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:38:14.539 [157/244] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:38:14.797 [158/244] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:38:15.055 [159/244] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:38:15.055 [160/244] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:38:15.055 [161/244] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:38:15.055 [162/244] Linking static target lib/librte_timer.a 00:38:15.055 [163/244] Linking target lib/librte_timer.so.24.1 00:38:15.313 [164/244] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:38:15.313 [165/244] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:38:15.313 [166/244] Linking static target 
lib/librte_compressdev.a 00:38:15.313 [167/244] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:38:15.313 [168/244] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:38:15.313 [169/244] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:38:15.572 [170/244] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:38:15.572 [171/244] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:38:15.572 [172/244] Linking target lib/librte_compressdev.so.24.1 00:38:15.572 [173/244] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:38:16.138 [174/244] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:38:16.138 [175/244] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:38:16.138 [176/244] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:38:16.138 [177/244] Linking static target lib/librte_dmadev.a 00:38:16.397 [178/244] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:38:16.397 [179/244] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:38:16.397 [180/244] Linking target lib/librte_dmadev.so.24.1 00:38:16.397 [181/244] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:38:16.397 [182/244] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:38:16.655 [183/244] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:38:16.655 [184/244] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:38:16.655 [185/244] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:38:16.655 [186/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:38:16.655 [187/244] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:38:16.913 [188/244] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:38:16.913 [189/244] 
Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:38:17.171 [190/244] Linking static target lib/librte_hash.a 00:38:17.171 [191/244] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:38:17.171 [192/244] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:38:17.171 [193/244] Linking target lib/librte_hash.so.24.1 00:38:17.171 [194/244] Linking target lib/librte_ethdev.so.24.1 00:38:17.429 [195/244] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:38:17.429 [196/244] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:38:17.429 [197/244] Linking static target lib/librte_cryptodev.a 00:38:17.429 [198/244] Linking target lib/librte_cryptodev.so.24.1 00:38:17.429 [199/244] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:38:17.429 [200/244] Linking static target lib/librte_power.a 00:38:17.429 [201/244] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:38:17.429 [202/244] Linking static target lib/librte_reorder.a 00:38:17.429 [203/244] Linking static target lib/librte_security.a 00:38:17.429 [204/244] Linking target lib/librte_reorder.so.24.1 00:38:17.687 [205/244] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:38:17.687 [206/244] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:38:17.687 [207/244] Linking static target lib/librte_ethdev.a 00:38:17.687 [208/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:38:17.687 [209/244] Linking target lib/librte_power.so.24.1 00:38:17.687 [210/244] Linking target lib/librte_security.so.24.1 00:38:18.622 [211/244] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:38:18.622 [212/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:38:18.622 [213/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:38:18.622 [214/244] Compiling C 
object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:38:18.622 [215/244] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:38:18.622 [216/244] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:38:18.622 [217/244] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:38:18.622 [218/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:38:18.880 [219/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:38:18.880 [220/244] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:38:18.880 [221/244] Linking static target drivers/libtmp_rte_bus_vdev.a 00:38:19.139 [222/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:38:19.139 [223/244] Linking static target drivers/libtmp_rte_bus_pci.a 00:38:19.139 [224/244] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:38:19.432 [225/244] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:38:19.432 [226/244] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:38:19.432 [227/244] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:38:19.432 [228/244] Linking static target drivers/librte_bus_vdev.a 00:38:19.432 [229/244] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:38:19.432 [230/244] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:38:19.432 [231/244] Linking target drivers/librte_bus_vdev.so.24.1 00:38:19.704 [232/244] Linking static target drivers/librte_bus_pci.a 00:38:19.704 [233/244] Linking target drivers/librte_bus_pci.so.24.1 00:38:19.962 [234/244] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:38:19.962 [235/244] Linking static target drivers/libtmp_rte_mempool_ring.a 00:38:20.220 [236/244] Generating drivers/rte_mempool_ring.pmd.c with a 
custom command 00:38:20.478 [237/244] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:38:20.478 [238/244] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:38:20.478 [239/244] Linking static target drivers/librte_mempool_ring.a 00:38:20.478 [240/244] Linking target drivers/librte_mempool_ring.so.24.1 00:38:21.927 [241/244] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:38:31.914 [242/244] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:38:31.914 [243/244] Linking target lib/librte_vhost.so.24.1 00:38:31.914 [244/244] Linking static target lib/librte_vhost.a 00:38:31.914 INFO: autodetecting backend as ninja 00:38:31.914 INFO: calculating backend command to run: /usr/local/bin/ninja -C /mnt/sdadir/spdk/dpdk/build-tmp 00:38:37.184 CC lib/ut_mock/mock.o 00:38:37.184 CC lib/log/log.o 00:38:37.184 CC lib/log/log_flags.o 00:38:37.184 CC lib/log/log_deprecated.o 00:38:37.184 LIB libspdk_ut_mock.a 00:38:37.184 LIB libspdk_log.a 00:38:37.751 CC lib/util/base64.o 00:38:37.751 CC lib/util/bit_array.o 00:38:37.751 CC lib/util/cpuset.o 00:38:37.751 CC lib/util/crc32.o 00:38:37.751 CC lib/util/crc16.o 00:38:37.751 CC lib/ioat/ioat.o 00:38:37.751 CXX lib/trace_parser/trace.o 00:38:37.751 CC lib/util/crc32_ieee.o 00:38:37.751 CC lib/util/crc64.o 00:38:37.751 CC lib/util/crc32c.o 00:38:37.751 CC lib/util/dif.o 00:38:37.751 CC lib/dma/dma.o 00:38:37.751 CC lib/util/fd.o 00:38:37.751 CC lib/util/fd_group.o 00:38:37.751 CC lib/util/file.o 00:38:37.751 CC lib/util/hexlify.o 00:38:37.751 CC lib/util/iov.o 00:38:37.751 CC lib/util/math.o 00:38:37.751 CC lib/util/net.o 00:38:37.751 CC lib/util/pipe.o 00:38:37.751 CC lib/util/strerror_tls.o 00:38:37.751 CC lib/util/string.o 00:38:37.751 CC lib/util/uuid.o 00:38:37.751 CC lib/util/xor.o 00:38:37.751 CC lib/util/zipf.o 00:38:38.010 CC lib/vfio_user/host/vfio_user_pci.o 00:38:38.010 CC 
lib/vfio_user/host/vfio_user.o 00:38:38.577 LIB libspdk_dma.a 00:38:38.577 LIB libspdk_ioat.a 00:38:38.577 LIB libspdk_vfio_user.a 00:38:38.835 LIB libspdk_trace_parser.a 00:38:39.093 LIB libspdk_util.a 00:38:40.029 CC lib/conf/conf.o 00:38:40.029 CC lib/json/json_parse.o 00:38:40.029 CC lib/json/json_write.o 00:38:40.029 CC lib/env_dpdk/env.o 00:38:40.029 CC lib/json/json_util.o 00:38:40.029 CC lib/env_dpdk/memory.o 00:38:40.029 CC lib/env_dpdk/pci.o 00:38:40.029 CC lib/env_dpdk/init.o 00:38:40.029 CC lib/env_dpdk/threads.o 00:38:40.029 CC lib/vmd/vmd.o 00:38:40.029 CC lib/vmd/led.o 00:38:40.029 CC lib/env_dpdk/pci_virtio.o 00:38:40.029 CC lib/env_dpdk/pci_ioat.o 00:38:40.029 CC lib/env_dpdk/pci_vmd.o 00:38:40.029 CC lib/env_dpdk/pci_idxd.o 00:38:40.029 CC lib/env_dpdk/pci_event.o 00:38:40.029 CC lib/env_dpdk/sigbus_handler.o 00:38:40.029 CC lib/env_dpdk/pci_dpdk.o 00:38:40.029 CC lib/env_dpdk/pci_dpdk_2207.o 00:38:40.029 CC lib/env_dpdk/pci_dpdk_2211.o 00:38:40.596 LIB libspdk_conf.a 00:38:40.854 LIB libspdk_vmd.a 00:38:40.854 LIB libspdk_json.a 00:38:41.420 CC lib/jsonrpc/jsonrpc_server.o 00:38:41.420 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:38:41.420 CC lib/jsonrpc/jsonrpc_client.o 00:38:41.420 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:38:41.677 LIB libspdk_jsonrpc.a 00:38:41.934 LIB libspdk_env_dpdk.a 00:38:42.226 CC lib/rpc/rpc.o 00:38:42.793 LIB libspdk_rpc.a 00:38:43.051 CC lib/keyring/keyring.o 00:38:43.051 CC lib/keyring/keyring_rpc.o 00:38:43.051 CC lib/notify/notify_rpc.o 00:38:43.051 CC lib/notify/notify.o 00:38:43.051 CC lib/trace/trace.o 00:38:43.051 CC lib/trace/trace_rpc.o 00:38:43.051 CC lib/trace/trace_flags.o 00:38:43.618 LIB libspdk_notify.a 00:38:43.618 LIB libspdk_keyring.a 00:38:43.618 LIB libspdk_trace.a 00:38:44.189 CC lib/sock/sock.o 00:38:44.189 CC lib/sock/sock_rpc.o 00:38:44.189 CC lib/thread/iobuf.o 00:38:44.189 CC lib/thread/thread.o 00:38:44.756 LIB libspdk_sock.a 00:38:45.323 CC lib/nvme/nvme_ctrlr_cmd.o 00:38:45.323 CC 
lib/nvme/nvme_ctrlr.o 00:38:45.323 CC lib/nvme/nvme_fabric.o 00:38:45.323 CC lib/nvme/nvme_ns_cmd.o 00:38:45.323 CC lib/nvme/nvme_ns.o 00:38:45.323 CC lib/nvme/nvme_pcie_common.o 00:38:45.323 CC lib/nvme/nvme_pcie.o 00:38:45.323 CC lib/nvme/nvme_qpair.o 00:38:45.323 CC lib/nvme/nvme.o 00:38:45.323 CC lib/nvme/nvme_quirks.o 00:38:45.323 CC lib/nvme/nvme_transport.o 00:38:45.323 CC lib/nvme/nvme_discovery.o 00:38:45.323 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:38:45.323 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:38:45.323 CC lib/nvme/nvme_tcp.o 00:38:45.323 CC lib/nvme/nvme_opal.o 00:38:45.323 CC lib/nvme/nvme_io_msg.o 00:38:45.323 CC lib/nvme/nvme_zns.o 00:38:45.323 CC lib/nvme/nvme_poll_group.o 00:38:45.323 CC lib/nvme/nvme_stubs.o 00:38:45.323 CC lib/nvme/nvme_auth.o 00:38:45.323 CC lib/nvme/nvme_cuse.o 00:38:46.258 LIB libspdk_thread.a 00:38:47.633 CC lib/blob/blobstore.o 00:38:47.633 CC lib/init/json_config.o 00:38:47.633 CC lib/blob/request.o 00:38:47.633 CC lib/virtio/virtio.o 00:38:47.633 CC lib/blob/zeroes.o 00:38:47.633 CC lib/virtio/virtio_vhost_user.o 00:38:47.633 CC lib/accel/accel.o 00:38:47.633 CC lib/init/subsystem.o 00:38:47.633 CC lib/init/subsystem_rpc.o 00:38:47.633 CC lib/blob/blob_bs_dev.o 00:38:47.633 CC lib/virtio/virtio_vfio_user.o 00:38:47.633 CC lib/virtio/virtio_pci.o 00:38:47.633 CC lib/accel/accel_rpc.o 00:38:47.633 CC lib/init/rpc.o 00:38:47.633 CC lib/accel/accel_sw.o 00:38:48.567 LIB libspdk_init.a 00:38:48.567 LIB libspdk_virtio.a 00:38:48.825 CC lib/event/app.o 00:38:48.825 CC lib/event/reactor.o 00:38:48.825 CC lib/event/app_rpc.o 00:38:48.825 CC lib/event/log_rpc.o 00:38:48.825 CC lib/event/scheduler_static.o 00:38:49.083 LIB libspdk_nvme.a 00:38:49.341 LIB libspdk_accel.a 00:38:49.599 LIB libspdk_event.a 00:38:50.165 CC lib/bdev/bdev.o 00:38:50.165 CC lib/bdev/bdev_zone.o 00:38:50.165 CC lib/bdev/bdev_rpc.o 00:38:50.165 CC lib/bdev/part.o 00:38:50.165 CC lib/bdev/scsi_nvme.o 00:38:51.537 LIB libspdk_blob.a 00:38:53.006 CC lib/blobfs/tree.o 
00:38:53.007 CC lib/blobfs/blobfs.o 00:38:53.007 CC lib/lvol/lvol.o 00:38:53.571 LIB libspdk_bdev.a 00:38:53.856 LIB libspdk_blobfs.a 00:38:54.122 LIB libspdk_lvol.a 00:38:55.056 CC lib/scsi/dev.o 00:38:55.056 CC lib/scsi/lun.o 00:38:55.056 CC lib/scsi/port.o 00:38:55.056 CC lib/scsi/scsi.o 00:38:55.056 CC lib/scsi/scsi_bdev.o 00:38:55.056 CC lib/ftl/ftl_init.o 00:38:55.056 CC lib/nvmf/ctrlr.o 00:38:55.056 CC lib/scsi/scsi_pr.o 00:38:55.056 CC lib/ftl/ftl_core.o 00:38:55.056 CC lib/ftl/ftl_debug.o 00:38:55.056 CC lib/ftl/ftl_layout.o 00:38:55.056 CC lib/scsi/scsi_rpc.o 00:38:55.056 CC lib/ftl/ftl_io.o 00:38:55.056 CC lib/nvmf/ctrlr_discovery.o 00:38:55.056 CC lib/scsi/task.o 00:38:55.056 CC lib/ftl/ftl_sb.o 00:38:55.056 CC lib/ftl/ftl_l2p.o 00:38:55.056 CC lib/ftl/ftl_l2p_flat.o 00:38:55.056 CC lib/ftl/ftl_nv_cache.o 00:38:55.056 CC lib/nvmf/ctrlr_bdev.o 00:38:55.056 CC lib/ftl/ftl_band.o 00:38:55.056 CC lib/nvmf/subsystem.o 00:38:55.056 CC lib/nbd/nbd.o 00:38:55.056 CC lib/ftl/ftl_band_ops.o 00:38:55.056 CC lib/nvmf/nvmf.o 00:38:55.056 CC lib/nvmf/nvmf_rpc.o 00:38:55.056 CC lib/ftl/ftl_writer.o 00:38:55.056 CC lib/nbd/nbd_rpc.o 00:38:55.056 CC lib/ftl/ftl_rq.o 00:38:55.056 CC lib/nvmf/transport.o 00:38:55.056 CC lib/ftl/ftl_reloc.o 00:38:55.056 CC lib/nvmf/tcp.o 00:38:55.056 CC lib/ftl/ftl_l2p_cache.o 00:38:55.056 CC lib/ftl/ftl_p2l.o 00:38:55.056 CC lib/nvmf/stubs.o 00:38:55.056 CC lib/ftl/mngt/ftl_mngt.o 00:38:55.056 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:38:55.056 CC lib/nvmf/mdns_server.o 00:38:55.056 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:38:55.056 CC lib/ftl/mngt/ftl_mngt_startup.o 00:38:55.056 CC lib/nvmf/auth.o 00:38:55.056 CC lib/ftl/mngt/ftl_mngt_md.o 00:38:55.056 CC lib/ftl/mngt/ftl_mngt_misc.o 00:38:55.056 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:38:55.056 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:38:55.056 CC lib/ftl/mngt/ftl_mngt_band.o 00:38:55.056 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:38:55.056 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:38:55.056 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:38:55.056 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:38:55.056 CC lib/ftl/utils/ftl_conf.o 00:38:55.056 CC lib/ftl/utils/ftl_md.o 00:38:55.056 CC lib/ftl/utils/ftl_mempool.o 00:38:55.056 CC lib/ftl/utils/ftl_bitmap.o 00:38:55.056 CC lib/ftl/utils/ftl_property.o 00:38:55.056 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:38:55.056 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:38:55.056 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:38:55.056 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:38:55.314 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:38:55.314 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:38:55.314 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:38:55.314 CC lib/ftl/upgrade/ftl_sb_v3.o 00:38:55.314 CC lib/ftl/upgrade/ftl_sb_v5.o 00:38:55.314 CC lib/ftl/nvc/ftl_nvc_dev.o 00:38:55.314 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:38:55.314 CC lib/ftl/base/ftl_base_dev.o 00:38:55.314 CC lib/ftl/base/ftl_base_bdev.o 00:38:57.843 LIB libspdk_nbd.a 00:38:57.843 LIB libspdk_scsi.a 00:38:57.843 LIB libspdk_ftl.a 00:38:58.100 LIB libspdk_nvmf.a 00:39:03.367 CC lib/iscsi/init_grp.o 00:39:03.367 CC lib/iscsi/conn.o 00:39:03.367 CC lib/iscsi/iscsi.o 00:39:03.367 CC lib/iscsi/md5.o 00:39:03.367 CC lib/iscsi/param.o 00:39:03.367 CC lib/iscsi/portal_grp.o 00:39:03.367 CC lib/vhost/vhost.o 00:39:03.367 CC lib/iscsi/tgt_node.o 00:39:03.367 CC lib/vhost/vhost_rpc.o 00:39:03.367 CC lib/vhost/vhost_scsi.o 00:39:03.367 CC lib/iscsi/iscsi_subsystem.o 00:39:03.367 CC lib/iscsi/iscsi_rpc.o 00:39:03.367 CC lib/iscsi/task.o 00:39:03.367 CC lib/vhost/vhost_blk.o 00:39:03.367 CC lib/vhost/rte_vhost_user.o 00:39:04.302 LIB libspdk_vhost.a 00:39:04.868 LIB libspdk_iscsi.a 00:39:06.244 CC module/env_dpdk/env_dpdk_rpc.o 00:39:06.244 CC module/scheduler/gscheduler/gscheduler.o 00:39:06.244 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:39:06.244 CC module/keyring/file/keyring.o 00:39:06.244 CC module/keyring/file/keyring_rpc.o 00:39:06.244 CC module/blob/bdev/blob_bdev.o 00:39:06.244 CC 
module/accel/error/accel_error.o 00:39:06.244 CC module/accel/error/accel_error_rpc.o 00:39:06.244 CC module/scheduler/dynamic/scheduler_dynamic.o 00:39:06.244 CC module/keyring/linux/keyring.o 00:39:06.244 CC module/sock/posix/posix.o 00:39:06.244 CC module/accel/ioat/accel_ioat.o 00:39:06.244 CC module/keyring/linux/keyring_rpc.o 00:39:06.244 CC module/accel/ioat/accel_ioat_rpc.o 00:39:06.502 LIB libspdk_env_dpdk_rpc.a 00:39:06.760 LIB libspdk_keyring_file.a 00:39:06.760 LIB libspdk_scheduler_dpdk_governor.a 00:39:06.760 LIB libspdk_scheduler_gscheduler.a 00:39:06.760 LIB libspdk_keyring_linux.a 00:39:06.760 LIB libspdk_accel_error.a 00:39:06.760 LIB libspdk_scheduler_dynamic.a 00:39:06.760 LIB libspdk_accel_ioat.a 00:39:06.760 LIB libspdk_blob_bdev.a 00:39:07.326 LIB libspdk_sock_posix.a 00:39:07.584 CC module/bdev/passthru/vbdev_passthru.o 00:39:07.584 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:39:07.584 CC module/bdev/error/vbdev_error.o 00:39:07.584 CC module/bdev/delay/vbdev_delay.o 00:39:07.584 CC module/bdev/error/vbdev_error_rpc.o 00:39:07.584 CC module/bdev/delay/vbdev_delay_rpc.o 00:39:07.584 CC module/bdev/raid/bdev_raid.o 00:39:07.584 CC module/blobfs/bdev/blobfs_bdev.o 00:39:07.584 CC module/bdev/raid/bdev_raid_rpc.o 00:39:07.584 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:39:07.584 CC module/bdev/raid/bdev_raid_sb.o 00:39:07.584 CC module/bdev/null/bdev_null.o 00:39:07.584 CC module/bdev/zone_block/vbdev_zone_block.o 00:39:07.584 CC module/bdev/gpt/gpt.o 00:39:07.584 CC module/bdev/raid/raid0.o 00:39:07.584 CC module/bdev/null/bdev_null_rpc.o 00:39:07.584 CC module/bdev/raid/raid1.o 00:39:07.584 CC module/bdev/gpt/vbdev_gpt.o 00:39:07.584 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:39:07.584 CC module/bdev/raid/concat.o 00:39:07.584 CC module/bdev/lvol/vbdev_lvol.o 00:39:07.584 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:39:07.584 CC module/bdev/split/vbdev_split.o 00:39:07.584 CC module/bdev/malloc/bdev_malloc.o 00:39:07.584 CC 
module/bdev/malloc/bdev_malloc_rpc.o 00:39:07.584 CC module/bdev/split/vbdev_split_rpc.o 00:39:07.584 CC module/bdev/ftl/bdev_ftl.o 00:39:07.584 CC module/bdev/aio/bdev_aio.o 00:39:07.584 CC module/bdev/ftl/bdev_ftl_rpc.o 00:39:07.584 CC module/bdev/aio/bdev_aio_rpc.o 00:39:07.584 CC module/bdev/virtio/bdev_virtio_scsi.o 00:39:07.584 CC module/bdev/virtio/bdev_virtio_blk.o 00:39:07.584 CC module/bdev/virtio/bdev_virtio_rpc.o 00:39:07.584 CC module/bdev/nvme/bdev_nvme.o 00:39:07.584 CC module/bdev/nvme/bdev_nvme_rpc.o 00:39:07.584 CC module/bdev/nvme/nvme_rpc.o 00:39:07.584 CC module/bdev/nvme/bdev_mdns_client.o 00:39:07.842 CC module/bdev/nvme/vbdev_opal.o 00:39:07.842 CC module/bdev/nvme/vbdev_opal_rpc.o 00:39:07.842 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:39:08.776 LIB libspdk_blobfs_bdev.a 00:39:09.034 LIB libspdk_bdev_error.a 00:39:09.034 LIB libspdk_bdev_aio.a 00:39:09.034 LIB libspdk_bdev_split.a 00:39:09.034 LIB libspdk_bdev_null.a 00:39:09.034 LIB libspdk_bdev_gpt.a 00:39:09.034 LIB libspdk_bdev_passthru.a 00:39:09.034 LIB libspdk_bdev_malloc.a 00:39:09.034 LIB libspdk_bdev_zone_block.a 00:39:09.034 LIB libspdk_bdev_ftl.a 00:39:09.034 LIB libspdk_bdev_delay.a 00:39:09.292 LIB libspdk_bdev_virtio.a 00:39:09.292 LIB libspdk_bdev_lvol.a 00:39:09.858 LIB libspdk_bdev_raid.a 00:39:10.793 LIB libspdk_bdev_nvme.a 00:39:12.693 CC module/event/subsystems/vmd/vmd.o 00:39:12.693 CC module/event/subsystems/scheduler/scheduler.o 00:39:12.693 CC module/event/subsystems/vmd/vmd_rpc.o 00:39:12.693 CC module/event/subsystems/sock/sock.o 00:39:12.693 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:39:12.693 CC module/event/subsystems/keyring/keyring.o 00:39:12.693 CC module/event/subsystems/iobuf/iobuf.o 00:39:12.693 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:39:12.693 LIB libspdk_event_vhost_blk.a 00:39:12.693 LIB libspdk_event_keyring.a 00:39:12.693 LIB libspdk_event_sock.a 00:39:12.693 LIB libspdk_event_scheduler.a 00:39:12.693 LIB libspdk_event_iobuf.a 
00:39:12.693 LIB libspdk_event_vmd.a 00:39:13.260 CC module/event/subsystems/accel/accel.o 00:39:13.519 LIB libspdk_event_accel.a 00:39:14.084 CC module/event/subsystems/bdev/bdev.o 00:39:14.342 LIB libspdk_event_bdev.a 00:39:14.601 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:39:14.601 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:39:14.601 CC module/event/subsystems/scsi/scsi.o 00:39:14.601 CC module/event/subsystems/nbd/nbd.o 00:39:14.859 LIB libspdk_event_nbd.a 00:39:14.859 LIB libspdk_event_scsi.a 00:39:15.119 LIB libspdk_event_nvmf.a 00:39:15.377 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:39:15.377 CC module/event/subsystems/iscsi/iscsi.o 00:39:15.636 LIB libspdk_event_vhost_scsi.a 00:39:15.636 LIB libspdk_event_iscsi.a 00:39:15.894 make[1]: Nothing to be done for 'all'. 00:39:16.152 CXX app/trace/trace.o 00:39:16.152 CC app/trace_record/trace_record.o 00:39:16.152 CC app/spdk_lspci/spdk_lspci.o 00:39:16.152 CC app/spdk_nvme_discover/discovery_aer.o 00:39:16.152 CC app/spdk_nvme_perf/perf.o 00:39:16.152 CC app/spdk_top/spdk_top.o 00:39:16.152 CC app/spdk_nvme_identify/identify.o 00:39:16.152 CC app/nvmf_tgt/nvmf_main.o 00:39:16.152 CC app/iscsi_tgt/iscsi_tgt.o 00:39:16.152 CC examples/interrupt_tgt/interrupt_tgt.o 00:39:16.152 CC app/spdk_dd/spdk_dd.o 00:39:16.152 CC app/spdk_tgt/spdk_tgt.o 00:39:16.410 CC examples/util/zipf/zipf.o 00:39:16.410 CC examples/ioat/perf/perf.o 00:39:16.410 CC examples/ioat/verify/verify.o 00:39:16.410 LINK spdk_lspci 00:39:16.668 LINK zipf 00:39:16.668 LINK iscsi_tgt 00:39:16.668 LINK nvmf_tgt 00:39:16.927 LINK spdk_nvme_discover 00:39:16.927 LINK spdk_tgt 00:39:16.927 LINK ioat_perf 00:39:16.927 LINK interrupt_tgt 00:39:16.927 LINK spdk_trace_record 00:39:16.927 LINK verify 00:39:16.927 LINK spdk_trace 00:39:17.185 LINK spdk_dd 00:39:18.119 LINK spdk_nvme_perf 00:39:18.119 LINK spdk_top 00:39:18.119 LINK spdk_nvme_identify 00:39:19.532 CC app/vhost/vhost.o 00:39:19.790 LINK vhost 00:39:23.091 CC 
examples/vmd/lsvmd/lsvmd.o 00:39:23.091 CC examples/vmd/led/led.o 00:39:23.091 CC examples/sock/hello_world/hello_sock.o 00:39:23.091 CC examples/thread/thread/thread_ex.o 00:39:23.091 LINK lsvmd 00:39:23.091 LINK led 00:39:23.349 LINK hello_sock 00:39:23.349 LINK thread 00:39:31.458 CC examples/nvme/cmb_copy/cmb_copy.o 00:39:31.458 CC examples/nvme/hotplug/hotplug.o 00:39:31.458 CC examples/nvme/abort/abort.o 00:39:31.458 CC examples/nvme/reconnect/reconnect.o 00:39:31.458 CC examples/nvme/nvme_manage/nvme_manage.o 00:39:31.458 CC examples/nvme/hello_world/hello_world.o 00:39:31.458 CC examples/nvme/arbitration/arbitration.o 00:39:31.458 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:39:31.458 LINK pmr_persistence 00:39:31.458 LINK cmb_copy 00:39:31.458 LINK hello_world 00:39:31.458 LINK hotplug 00:39:31.458 LINK abort 00:39:31.458 LINK reconnect 00:39:31.458 LINK arbitration 00:39:32.024 LINK nvme_manage 00:39:44.223 CC examples/accel/perf/accel_perf.o 00:39:44.223 CC examples/blob/cli/blobcli.o 00:39:44.223 CC examples/blob/hello_world/hello_blob.o 00:39:44.481 LINK hello_blob 00:39:45.048 LINK accel_perf 00:39:45.048 LINK blobcli 00:39:50.314 CC examples/bdev/bdevperf/bdevperf.o 00:39:50.314 CC examples/bdev/hello_world/hello_bdev.o 00:39:50.586 LINK hello_bdev 00:39:51.601 LINK bdevperf 00:40:01.562 CC examples/nvmf/nvmf/nvmf.o 00:40:01.562 LINK nvmf 00:40:13.774 make: Leaving directory '/mnt/sdadir/spdk' 00:40:13.774 17:18:14 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@101 -- # rm -rf /mnt/sdadir/spdk 00:41:09.993 17:19:05 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@102 -- # umount /mnt/sdadir 00:41:09.993 17:19:05 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@103 -- # rm -rf /mnt/sdadir 00:41:09.993 17:19:05 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@105 -- # stats=($(cat "/sys/block/$dev/stat")) 00:41:09.993 17:19:05 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@105 -- # cat /sys/block/sda/stat 00:41:09.993 
READ IO cnt: 44 merges: 0 sectors: 1184 ticks: 30 00:41:09.993 WRITE IO cnt: 635204 merges: 631318 sectors: 10917128 ticks: 670912 00:41:09.993 in flight: 0 io ticks: 302329 time in queue: 741225 00:41:09.993 17:19:05 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@107 -- # printf 'READ IO cnt: % 8u merges: % 8u sectors: % 8u ticks: % 8u\n' 44 0 1184 30 00:41:09.993 17:19:05 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@109 -- # printf 'WRITE IO cnt: % 8u merges: % 8u sectors: % 8u ticks: % 8u\n' 635204 631318 10917128 670912 00:41:09.994 17:19:05 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@111 -- # printf 'in flight: % 8u io ticks: % 8u time in queue: % 8u\n' 0 302329 741225 00:41:09.994 17:19:05 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@1 -- # cleanup 00:41:09.994 17:19:05 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_delete Nvme0n1 00:41:09.994 [2024-07-22 17:19:05.615209] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Nvme0n1p0) received event(SPDK_BDEV_EVENT_REMOVE) 00:41:09.994 17:19:05 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@13 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_delete EE_Malloc0 00:41:09.994 17:19:05 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:41:09.994 17:19:06 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@15 -- # killprocess 83128 00:41:09.994 17:19:06 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@948 -- # '[' -z 83128 ']' 00:41:09.994 17:19:06 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@952 -- # kill -0 83128 00:41:09.994 17:19:06 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@953 -- # uname 00:41:09.994 17:19:06 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:09.994 17:19:06 iscsi_tgt.iscsi_tgt_ext4test -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83128 00:41:09.994 killing process with pid 83128 00:41:09.994 17:19:06 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:09.994 17:19:06 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:09.994 17:19:06 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83128' 00:41:09.994 17:19:06 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@967 -- # kill 83128 00:41:09.994 17:19:06 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@972 -- # wait 83128 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@17 -- # mountpoint -q /mnt/sdadir 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@18 -- # rm -rf /mnt/sdadir 00:41:09.994 Cleaning up iSCSI connection 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@20 -- # iscsicleanup 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:41:09.994 Logging out of session [sid: 72, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] 00:41:09.994 Logout of [sid: 72, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@983 -- # rm -rf 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@21 -- # iscsitestfini 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:41:09.994 ************************************ 00:41:09.994 END TEST iscsi_tgt_ext4test 00:41:09.994 ************************************ 00:41:09.994 00:41:09.994 real 7m18.628s 00:41:09.994 user 12m34.408s 00:41:09.994 sys 2m55.921s 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:41:09.994 17:19:10 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:41:09.994 17:19:10 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@49 -- # '[' 1 -eq 1 ']' 00:41:09.994 17:19:10 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@50 -- # hash ceph 00:41:09.994 17:19:10 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@54 -- # run_test iscsi_tgt_rbd /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rbd/rbd.sh 00:41:09.994 17:19:10 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:09.994 17:19:10 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:09.994 17:19:10 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:41:09.994 ************************************ 00:41:09.994 START TEST iscsi_tgt_rbd 00:41:09.994 ************************************ 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rbd/rbd.sh 00:41:09.994 * Looking for test storage... 
00:41:09.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rbd 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- 
iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@11 -- # iscsitestinit 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@13 -- # timing_enter rbd_setup 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@14 -- # rbd_setup 10.0.0.1 spdk_iscsi_ns 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1005 -- # '[' -z 10.0.0.1 ']' 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1009 -- # '[' -n spdk_iscsi_ns ']' 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1010 -- # ip netns list 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1010 -- # grep spdk_iscsi_ns 00:41:09.994 spdk_iscsi_ns (id: 0) 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1011 -- # NS_CMD='ip netns exec spdk_iscsi_ns' 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1018 -- # hash ceph 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1019 -- # export PG_NUM=128 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1019 -- # PG_NUM=128 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1020 -- # export RBD_POOL=rbd 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1020 -- # RBD_POOL=rbd 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1021 -- # export RBD_NAME=foo 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1021 
-- # RBD_NAME=foo 00:41:09.994 17:19:10 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1022 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:41:09.994 + base_dir=/var/tmp/ceph 00:41:09.994 + image=/var/tmp/ceph/ceph_raw.img 00:41:09.994 + dev=/dev/loop200 00:41:09.994 + pkill -9 ceph 00:41:09.994 + sleep 3 00:41:11.892 + umount /dev/loop200p2 00:41:11.892 umount: /dev/loop200p2: no mount point specified. 00:41:11.892 + losetup -d /dev/loop200 00:41:11.892 losetup: /dev/loop200: failed to use device: No such device 00:41:11.892 + rm -rf /var/tmp/ceph 00:41:11.892 17:19:13 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1023 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 10.0.0.1 00:41:11.892 + set -e 00:41:11.892 +++ dirname /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 00:41:11.892 ++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/ceph 00:41:11.892 + script_dir=/home/vagrant/spdk_repo/spdk/scripts/ceph 00:41:11.892 + base_dir=/var/tmp/ceph 00:41:11.892 + mon_ip=10.0.0.1 00:41:11.892 + mon_dir=/var/tmp/ceph/mon.a 00:41:11.892 + pid_dir=/var/tmp/ceph/pid 00:41:11.892 + ceph_conf=/var/tmp/ceph/ceph.conf 00:41:11.892 + mnt_dir=/var/tmp/ceph/mnt 00:41:11.892 + image=/var/tmp/ceph_raw.img 00:41:11.892 + dev=/dev/loop200 00:41:11.892 + modprobe loop 00:41:11.892 + umount /dev/loop200p2 00:41:11.892 umount: /dev/loop200p2: no mount point specified. 00:41:11.892 + true 00:41:11.892 + losetup -d /dev/loop200 00:41:11.892 losetup: /dev/loop200: failed to use device: No such device 00:41:11.892 + true 00:41:11.892 + '[' -d /var/tmp/ceph ']' 00:41:11.892 + mkdir /var/tmp/ceph 00:41:11.892 + cp /home/vagrant/spdk_repo/spdk/scripts/ceph/ceph.conf /var/tmp/ceph/ceph.conf 00:41:11.892 + '[' '!' 
-e /var/tmp/ceph_raw.img ']' 00:41:11.892 + fallocate -l 4G /var/tmp/ceph_raw.img 00:41:11.892 + mknod /dev/loop200 b 7 200 00:41:11.892 + losetup /dev/loop200 /var/tmp/ceph_raw.img 00:41:11.892 + PARTED='parted -s' 00:41:11.892 + SGDISK=sgdisk 00:41:11.892 + echo 'Partitioning /dev/loop200' 00:41:11.892 Partitioning /dev/loop200 00:41:11.892 + parted -s /dev/loop200 mktable gpt 00:41:11.892 + sleep 2 00:41:13.791 + parted -s /dev/loop200 mkpart primary 0% 2GiB 00:41:14.049 + parted -s /dev/loop200 mkpart primary 2GiB 100% 00:41:14.049 + partno=0 00:41:14.049 + echo 'Setting name on /dev/loop200' 00:41:14.049 Setting name on /dev/loop200 00:41:14.049 + sgdisk -c 1:osd-device-0-journal /dev/loop200 00:41:14.983 Warning: The kernel is still using the old partition table. 00:41:14.983 The new table will be used at the next reboot or after you 00:41:14.983 run partprobe(8) or kpartx(8) 00:41:14.983 The operation has completed successfully. 00:41:14.983 + sgdisk -c 2:osd-device-0-data /dev/loop200 00:41:15.915 Warning: The kernel is still using the old partition table. 00:41:15.915 The new table will be used at the next reboot or after you 00:41:15.915 run partprobe(8) or kpartx(8) 00:41:15.915 The operation has completed successfully. 
00:41:15.915 + kpartx /dev/loop200 00:41:15.915 loop200p1 : 0 4192256 /dev/loop200 2048 00:41:15.915 loop200p2 : 0 4192256 /dev/loop200 4194304 00:41:15.915 ++ awk '{print $3}' 00:41:15.915 ++ ceph -v 00:41:16.173 + ceph_version=17.2.7 00:41:16.173 + ceph_maj=17 00:41:16.173 + '[' 17 -gt 12 ']' 00:41:16.173 + update_config=true 00:41:16.173 + rm -f /var/log/ceph/ceph-mon.a.log 00:41:16.173 + set_min_mon_release='--set-min-mon-release 14' 00:41:16.173 + ceph_osd_extra_config='--check-needs-journal --no-mon-config' 00:41:16.173 + mnt_pt=/var/tmp/ceph/mnt/osd-device-0-data 00:41:16.173 + mkdir -p /var/tmp/ceph/mnt/osd-device-0-data 00:41:16.173 + mkfs.xfs -f /dev/disk/by-partlabel/osd-device-0-data 00:41:16.173 meta-data=/dev/disk/by-partlabel/osd-device-0-data isize=512 agcount=4, agsize=131008 blks 00:41:16.173 = sectsz=512 attr=2, projid32bit=1 00:41:16.173 = crc=1 finobt=1, sparse=1, rmapbt=0 00:41:16.173 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:41:16.173 data = bsize=4096 blocks=524032, imaxpct=25 00:41:16.173 = sunit=0 swidth=0 blks 00:41:16.173 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:41:16.173 log =internal log bsize=4096 blocks=16384, version=2 00:41:16.173 = sectsz=512 sunit=0 blks, lazy-count=1 00:41:16.173 realtime =none extsz=4096 blocks=0, rtextents=0 00:41:16.173 Discarding blocks...Done. 00:41:16.173 + mount /dev/disk/by-partlabel/osd-device-0-data /var/tmp/ceph/mnt/osd-device-0-data 00:41:16.173 + cat 00:41:16.173 + rm -rf '/var/tmp/ceph/mon.a/*' 00:41:16.173 + mkdir -p /var/tmp/ceph/mon.a 00:41:16.173 + mkdir -p /var/tmp/ceph/pid 00:41:16.173 + rm -f /etc/ceph/ceph.client.admin.keyring 00:41:16.173 + ceph-authtool --create-keyring --gen-key --name=mon. 
/var/tmp/ceph/keyring --cap mon 'allow *' 00:41:16.173 creating /var/tmp/ceph/keyring 00:41:16.173 + ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /var/tmp/ceph/keyring 00:41:16.431 + monmaptool --create --clobber --add a 10.0.0.1:12046 --print /var/tmp/ceph/monmap --set-min-mon-release 14 00:41:16.432 monmaptool: monmap file /var/tmp/ceph/monmap 00:41:16.432 monmaptool: generated fsid 174376fa-68de-448f-b913-03dc7d84f76b 00:41:16.432 setting min_mon_release = octopus 00:41:16.432 epoch 0 00:41:16.432 fsid 174376fa-68de-448f-b913-03dc7d84f76b 00:41:16.432 last_changed 2024-07-22T17:19:17.853120+0000 00:41:16.432 created 2024-07-22T17:19:17.853120+0000 00:41:16.432 min_mon_release 15 (octopus) 00:41:16.432 election_strategy: 1 00:41:16.432 0: v2:10.0.0.1:12046/0 mon.a 00:41:16.432 monmaptool: writing epoch 0 to /var/tmp/ceph/monmap (1 monitors) 00:41:16.432 + sh -c 'ulimit -c unlimited && exec ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring --mon-data=/var/tmp/ceph/mon.a' 00:41:16.432 + '[' true = true ']' 00:41:16.432 + sed -i 's/mon addr = /mon addr = v2:/g' /var/tmp/ceph/ceph.conf 00:41:16.432 + cp /var/tmp/ceph/keyring /var/tmp/ceph/mon.a/keyring 00:41:16.432 + cp /var/tmp/ceph/ceph.conf /etc/ceph/ceph.conf 00:41:16.432 + cp /var/tmp/ceph/keyring /etc/ceph/keyring 00:41:16.432 + cp /var/tmp/ceph/keyring /etc/ceph/ceph.client.admin.keyring 00:41:16.432 + chmod a+r /etc/ceph/ceph.client.admin.keyring 00:41:16.432 ++ hostname 00:41:16.432 + ceph-run sh -c 'ulimit -n 16384 && ulimit -c unlimited && exec ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring --pid-file=/var/tmp/ceph/pid/root@fedora38-cloud-1716830599-074-updated-1705279005.pid --mon-data=/var/tmp/ceph/mon.a' 00:41:16.690 + true 00:41:16.690 + '[' true = true ']' 00:41:16.690 + ceph-conf --name mon.a --show-config-value log_file 00:41:16.690 
/var/log/ceph/ceph-mon.a.log 00:41:16.690 ++ ceph -s 00:41:16.690 ++ grep id 00:41:16.690 ++ awk '{print $2}' 00:41:16.947 + fsid=174376fa-68de-448f-b913-03dc7d84f76b 00:41:16.947 + sed -i 's/perf = true/perf = true\n\tfsid = 174376fa-68de-448f-b913-03dc7d84f76b \n/g' /var/tmp/ceph/ceph.conf 00:41:16.947 + (( ceph_maj < 18 )) 00:41:16.947 + sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' /var/tmp/ceph/ceph.conf 00:41:16.947 + cat /var/tmp/ceph/ceph.conf 00:41:16.947 [global] 00:41:16.947 debug_lockdep = 0/0 00:41:16.947 debug_context = 0/0 00:41:16.947 debug_crush = 0/0 00:41:16.947 debug_buffer = 0/0 00:41:16.947 debug_timer = 0/0 00:41:16.947 debug_filer = 0/0 00:41:16.947 debug_objecter = 0/0 00:41:16.947 debug_rados = 0/0 00:41:16.947 debug_rbd = 0/0 00:41:16.947 debug_ms = 0/0 00:41:16.947 debug_monc = 0/0 00:41:16.947 debug_tp = 0/0 00:41:16.947 debug_auth = 0/0 00:41:16.947 debug_finisher = 0/0 00:41:16.947 debug_heartbeatmap = 0/0 00:41:16.947 debug_perfcounter = 0/0 00:41:16.947 debug_asok = 0/0 00:41:16.947 debug_throttle = 0/0 00:41:16.947 debug_mon = 0/0 00:41:16.947 debug_paxos = 0/0 00:41:16.947 debug_rgw = 0/0 00:41:16.947 00:41:16.947 perf = true 00:41:16.947 osd objectstore = filestore 00:41:16.947 00:41:16.947 fsid = 174376fa-68de-448f-b913-03dc7d84f76b 00:41:16.947 00:41:16.947 mutex_perf_counter = false 00:41:16.947 throttler_perf_counter = false 00:41:16.947 rbd cache = false 00:41:16.947 mon_allow_pool_delete = true 00:41:16.947 00:41:16.947 osd_pool_default_size = 1 00:41:16.947 00:41:16.947 [mon] 00:41:16.947 mon_max_pool_pg_num=166496 00:41:16.947 mon_osd_max_split_count = 10000 00:41:16.947 mon_pg_warn_max_per_osd = 10000 00:41:16.947 00:41:16.947 [osd] 00:41:16.947 osd_op_threads = 64 00:41:16.947 filestore_queue_max_ops=5000 00:41:16.947 filestore_queue_committing_max_ops=5000 00:41:16.947 journal_max_write_entries=1000 00:41:16.947 journal_queue_max_ops=3000 00:41:16.947 objecter_inflight_ops=102400 00:41:16.947 
filestore_wbthrottle_enable=false 00:41:16.947 filestore_queue_max_bytes=1048576000 00:41:16.947 filestore_queue_committing_max_bytes=1048576000 00:41:16.947 journal_max_write_bytes=1048576000 00:41:16.947 journal_queue_max_bytes=1048576000 00:41:16.947 ms_dispatch_throttle_bytes=1048576000 00:41:16.947 objecter_inflight_op_bytes=1048576000 00:41:16.947 filestore_max_sync_interval=10 00:41:16.947 osd_client_message_size_cap = 0 00:41:16.947 osd_client_message_cap = 0 00:41:16.947 osd_enable_op_tracker = false 00:41:16.947 filestore_fd_cache_size = 10240 00:41:16.947 filestore_fd_cache_shards = 64 00:41:16.947 filestore_op_threads = 16 00:41:16.947 osd_op_num_shards = 48 00:41:16.947 osd_op_num_threads_per_shard = 2 00:41:16.947 osd_pg_object_context_cache_count = 10240 00:41:16.947 filestore_odsync_write = True 00:41:16.947 journal_dynamic_throttle = True 00:41:16.947 00:41:16.947 [osd.0] 00:41:16.947 osd data = /var/tmp/ceph/mnt/osd-device-0-data 00:41:16.947 osd journal = /dev/disk/by-partlabel/osd-device-0-journal 00:41:16.947 00:41:16.947 # add mon address 00:41:16.947 [mon.a] 00:41:16.947 mon addr = v2:10.0.0.1:12046 00:41:16.947 + i=0 00:41:16.947 + mkdir -p /var/tmp/ceph/mnt 00:41:16.947 ++ uuidgen 00:41:16.947 + uuid=3027529d-31c2-428b-a7df-2c419f7b13c3 00:41:16.947 + ceph -c /var/tmp/ceph/ceph.conf osd create 3027529d-31c2-428b-a7df-2c419f7b13c3 0 00:41:17.205 0 00:41:17.205 + ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --mkfs --mkkey --osd-uuid 3027529d-31c2-428b-a7df-2c419f7b13c3 --check-needs-journal --no-mon-config 00:41:17.205 2024-07-22T17:19:18.784+0000 7fa5d4a3f400 -1 auth: error reading file: /var/tmp/ceph/mnt/osd-device-0-data/keyring: can't open /var/tmp/ceph/mnt/osd-device-0-data/keyring: (2) No such file or directory 00:41:17.205 2024-07-22T17:19:18.785+0000 7fa5d4a3f400 -1 created new key in keyring /var/tmp/ceph/mnt/osd-device-0-data/keyring 00:41:17.463 2024-07-22T17:19:18.826+0000 7fa5d4a3f400 -1 journal check: ondisk fsid 
00000000-0000-0000-0000-000000000000 doesn't match expected 3027529d-31c2-428b-a7df-2c419f7b13c3, invalid (someone else's?) journal 00:41:17.463 2024-07-22T17:19:18.858+0000 7fa5d4a3f400 -1 journal do_read_entry(4096): bad header magic 00:41:17.463 2024-07-22T17:19:18.858+0000 7fa5d4a3f400 -1 journal do_read_entry(4096): bad header magic 00:41:17.463 ++ hostname 00:41:17.463 + ceph -c /var/tmp/ceph/ceph.conf osd crush add osd.0 1.0 host=fedora38-cloud-1716830599-074-updated-1705279005 root=default 00:41:18.846 add item id 0 name 'osd.0' weight 1 at location {host=fedora38-cloud-1716830599-074-updated-1705279005,root=default} to crush map 00:41:18.846 + ceph -c /var/tmp/ceph/ceph.conf -i /var/tmp/ceph/mnt/osd-device-0-data/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow *' 00:41:18.846 added key for osd.0 00:41:19.103 ++ ceph -c /var/tmp/ceph/ceph.conf config get osd osd_class_dir 00:41:19.103 + class_dir=/lib64/rados-classes 00:41:19.103 + [[ -e /lib64/rados-classes ]] 00:41:19.103 + ceph -c /var/tmp/ceph/ceph.conf config set osd osd_class_dir /lib64/rados-classes 00:41:19.669 + pkill -9 ceph-osd 00:41:19.669 + true 00:41:19.669 + sleep 2 00:41:21.589 + mkdir -p /var/tmp/ceph/pid 00:41:21.589 + env -i TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --pid-file=/var/tmp/ceph/pid/ceph-osd.0.pid 00:41:21.589 2024-07-22T17:19:23.095+0000 7f3dde048400 -1 Falling back to public interface 00:41:21.589 2024-07-22T17:19:23.134+0000 7f3dde048400 -1 journal do_read_entry(8192): bad header magic 00:41:21.589 2024-07-22T17:19:23.134+0000 7f3dde048400 -1 journal do_read_entry(8192): bad header magic 00:41:21.589 2024-07-22T17:19:23.157+0000 7f3dde048400 -1 osd.0 0 log_to_monitors true 00:41:22.969 17:19:24 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1025 -- # ip netns exec spdk_iscsi_ns ceph osd pool create rbd 128 00:41:23.921 pool 'rbd' created 00:41:23.921 17:19:25 iscsi_tgt.iscsi_tgt_rbd -- 
common/autotest_common.sh@1026 -- # ip netns exec spdk_iscsi_ns rbd create foo --size 1000 00:41:29.199 17:19:30 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@15 -- # trap 'rbd_cleanup; exit 1' SIGINT SIGTERM EXIT 00:41:29.199 17:19:30 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@16 -- # timing_exit rbd_setup 00:41:29.199 17:19:30 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:29.199 17:19:30 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:41:29.199 17:19:30 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@18 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:41:29.199 17:19:30 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@20 -- # timing_enter start_iscsi_tgt 00:41:29.199 17:19:30 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:29.199 17:19:30 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:41:29.199 17:19:30 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@23 -- # pid=123525 00:41:29.199 17:19:30 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@22 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:41:29.199 17:19:30 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@25 -- # trap 'killprocess $pid; rbd_cleanup; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:41:29.199 17:19:30 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@27 -- # waitforlisten 123525 00:41:29.199 17:19:30 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@829 -- # '[' -z 123525 ']' 00:41:29.199 17:19:30 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:29.199 17:19:30 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:29.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:29.199 17:19:30 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:29.199 17:19:30 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:29.199 17:19:30 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:41:29.199 [2024-07-22 17:19:30.617725] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:41:29.199 [2024-07-22 17:19:30.618012] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123525 ] 00:41:29.199 [2024-07-22 17:19:30.804400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:29.765 [2024-07-22 17:19:31.082772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:29.765 [2024-07-22 17:19:31.082920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:41:29.765 [2024-07-22 17:19:31.083065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:29.765 [2024-07-22 17:19:31.083084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:41:30.023 17:19:31 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:30.023 17:19:31 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@862 -- # return 0 00:41:30.023 17:19:31 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@28 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:41:30.023 17:19:31 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:30.023 17:19:31 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:41:30.023 17:19:31 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:30.023 17:19:31 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@29 -- # rpc_cmd framework_start_init 00:41:30.023 17:19:31 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:30.023 17:19:31 iscsi_tgt.iscsi_tgt_rbd -- 
common/autotest_common.sh@10 -- # set +x 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:30.956 iscsi_tgt is listening. Running tests... 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@30 -- # echo 'iscsi_tgt is listening. Running tests...' 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@32 -- # timing_exit start_iscsi_tgt 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@34 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@35 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@36 -- # rpc_cmd bdev_rbd_register_cluster iscsi_rbd_cluster --key-file /etc/ceph/ceph.client.admin.keyring --config-file /etc/ceph/ceph.conf 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@36 -- # rbd_cluster_name=iscsi_rbd_cluster 
00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@37 -- # rpc_cmd bdev_rbd_get_clusters_info -b iscsi_rbd_cluster 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:41:30.956 { 00:41:30.956 "cluster_name": "iscsi_rbd_cluster", 00:41:30.956 "config_file": "/etc/ceph/ceph.conf", 00:41:30.956 "key_file": "/etc/ceph/ceph.client.admin.keyring" 00:41:30.956 } 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@38 -- # rpc_cmd bdev_rbd_create rbd foo 4096 -c iscsi_rbd_cluster 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:30.956 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:41:31.214 [2024-07-22 17:19:32.579865] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:41:31.214 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:31.214 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@38 -- # rbd_bdev=Ceph0 00:41:31.214 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@39 -- # rpc_cmd bdev_get_bdevs 00:41:31.214 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:31.214 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:41:31.214 [ 00:41:31.214 { 00:41:31.214 "name": "Ceph0", 00:41:31.214 "aliases": [ 00:41:31.214 "9d547f38-ff76-461f-ad01-2994ae5c6f0b" 00:41:31.214 ], 00:41:31.214 "product_name": "Ceph Rbd Disk", 00:41:31.214 "block_size": 4096, 00:41:31.214 "num_blocks": 256000, 00:41:31.214 "uuid": "9d547f38-ff76-461f-ad01-2994ae5c6f0b", 00:41:31.214 "assigned_rate_limits": { 00:41:31.214 "rw_ios_per_sec": 0, 00:41:31.214 "rw_mbytes_per_sec": 0, 00:41:31.214 "r_mbytes_per_sec": 0, 00:41:31.214 "w_mbytes_per_sec": 0 
00:41:31.214 }, 00:41:31.214 "claimed": false, 00:41:31.214 "zoned": false, 00:41:31.214 "supported_io_types": { 00:41:31.214 "read": true, 00:41:31.214 "write": true, 00:41:31.214 "unmap": true, 00:41:31.214 "flush": true, 00:41:31.214 "reset": true, 00:41:31.214 "nvme_admin": false, 00:41:31.214 "nvme_io": false, 00:41:31.214 "nvme_io_md": false, 00:41:31.214 "write_zeroes": true, 00:41:31.214 "zcopy": false, 00:41:31.214 "get_zone_info": false, 00:41:31.214 "zone_management": false, 00:41:31.214 "zone_append": false, 00:41:31.214 "compare": false, 00:41:31.214 "compare_and_write": true, 00:41:31.214 "abort": false, 00:41:31.214 "seek_hole": false, 00:41:31.214 "seek_data": false, 00:41:31.214 "copy": false, 00:41:31.214 "nvme_iov_md": false 00:41:31.214 }, 00:41:31.214 "driver_specific": { 00:41:31.214 "rbd": { 00:41:31.214 "pool_name": "rbd", 00:41:31.214 "rbd_name": "foo", 00:41:31.214 "config_file": "/etc/ceph/ceph.conf", 00:41:31.214 "key_file": "/etc/ceph/ceph.client.admin.keyring" 00:41:31.214 } 00:41:31.214 } 00:41:31.214 } 00:41:31.214 ] 00:41:31.214 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:31.214 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@41 -- # rpc_cmd bdev_rbd_resize Ceph0 2000 00:41:31.214 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:31.214 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:41:31.472 true 00:41:31.472 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:31.472 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # rpc_cmd bdev_get_bdevs 00:41:31.472 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # grep num_blocks 00:41:31.472 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:31.472 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # sed 's/[^[:digit:]]//g' 00:41:31.472 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- 
common/autotest_common.sh@10 -- # set +x 00:41:31.472 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:31.472 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # num_block=512000 00:41:31.472 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@44 -- # total_size=2000 00:41:31.472 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@45 -- # '[' 2000 '!=' 2000 ']' 00:41:31.472 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@53 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Ceph0:0 1:2 64 -d 00:41:31.472 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:31.472 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:41:31.472 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:31.472 17:19:32 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@54 -- # sleep 1 00:41:32.407 17:19:33 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@56 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:41:32.407 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:41:32.407 17:19:33 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@57 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:41:32.407 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:41:32.407 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:41:32.407 17:19:33 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@58 -- # waitforiscsidevices 1 00:41:32.407 17:19:33 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@116 -- # local num=1 00:41:32.407 17:19:33 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:41:32.407 17:19:33 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:41:32.407 17:19:33 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:41:32.407 17:19:33 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:41:32.407 [2024-07-22 17:19:33.993695] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:41:32.407 17:19:33 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@119 -- # n=1 00:41:32.407 17:19:33 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:41:32.407 17:19:33 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@123 -- # return 0 00:41:32.407 17:19:33 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@60 -- # trap 'iscsicleanup; killprocess $pid; rbd_cleanup; exit 1' SIGINT SIGTERM EXIT 00:41:32.407 17:19:33 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t randrw -r 1 -v 00:41:32.665 [global] 00:41:32.665 thread=1 00:41:32.665 invalidate=1 00:41:32.665 rw=randrw 00:41:32.665 time_based=1 00:41:32.665 runtime=1 00:41:32.665 ioengine=libaio 00:41:32.665 direct=1 00:41:32.665 bs=4096 00:41:32.665 iodepth=1 00:41:32.665 norandommap=0 00:41:32.665 numjobs=1 00:41:32.665 00:41:32.665 verify_dump=1 00:41:32.665 verify_backlog=512 00:41:32.665 verify_state_save=0 00:41:32.665 do_verify=1 00:41:32.665 verify=crc32c-intel 00:41:32.665 [job0] 00:41:32.665 filename=/dev/sda 00:41:32.665 queue_depth set to 113 (sda) 00:41:32.665 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:32.665 fio-3.35 00:41:32.665 Starting 1 thread 00:41:32.665 
[2024-07-22 17:19:34.170362] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:41:34.041 [2024-07-22 17:19:35.282989] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:41:34.041 00:41:34.041 job0: (groupid=0, jobs=1): err= 0: pid=123655: Mon Jul 22 17:19:35 2024 00:41:34.041 read: IOPS=63, BW=255KiB/s (261kB/s)(256KiB/1004msec) 00:41:34.041 slat (nsec): min=11575, max=74865, avg=39370.61, stdev=15458.83 00:41:34.041 clat (usec): min=184, max=1991, avg=431.84, stdev=290.40 00:41:34.041 lat (usec): min=210, max=2042, avg=471.21, stdev=295.32 00:41:34.041 clat percentiles (usec): 00:41:34.041 | 1.00th=[ 186], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 241], 00:41:34.041 | 30.00th=[ 265], 40.00th=[ 347], 50.00th=[ 379], 60.00th=[ 424], 00:41:34.041 | 70.00th=[ 441], 80.00th=[ 469], 90.00th=[ 627], 95.00th=[ 1057], 00:41:34.041 | 99.00th=[ 1991], 99.50th=[ 1991], 99.90th=[ 1991], 99.95th=[ 1991], 00:41:34.041 | 99.99th=[ 1991] 00:41:34.041 bw ( KiB/s): min= 231, max= 280, per=100.00%, avg=255.50, stdev=34.65, samples=2 00:41:34.041 iops : min= 57, max= 70, avg=63.50, stdev= 9.19, samples=2 00:41:34.041 write: IOPS=67, BW=271KiB/s (277kB/s)(272KiB/1004msec); 0 zone resets 00:41:34.041 slat (nsec): min=25016, max=83113, avg=47173.51, stdev=12305.39 00:41:34.041 clat (usec): min=4277, max=22972, avg=14245.54, stdev=3531.32 00:41:34.041 lat (usec): min=4304, max=23026, avg=14292.72, stdev=3531.62 00:41:34.041 clat percentiles (usec): 00:41:34.041 | 1.00th=[ 4293], 5.00th=[ 5407], 10.00th=[10028], 20.00th=[11994], 00:41:34.041 | 30.00th=[13435], 40.00th=[14746], 50.00th=[15008], 60.00th=[15401], 00:41:34.041 | 70.00th=[15926], 80.00th=[16319], 90.00th=[17695], 95.00th=[19006], 00:41:34.041 | 99.00th=[22938], 99.50th=[22938], 99.90th=[22938], 99.95th=[22938], 00:41:34.041 | 99.99th=[22938] 00:41:34.041 bw ( KiB/s): min= 263, max= 272, per=98.55%, avg=267.50, stdev= 6.36, samples=2 00:41:34.041 iops : min= 65, 
max= 68, avg=66.50, stdev= 2.12, samples=2 00:41:34.041 lat (usec) : 250=12.12%, 500=28.79%, 750=4.55% 00:41:34.041 lat (msec) : 2=3.03%, 10=4.55%, 20=45.45%, 50=1.52% 00:41:34.041 cpu : usr=0.40%, sys=0.50%, ctx=132, majf=0, minf=1 00:41:34.041 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:34.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:34.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:34.041 issued rwts: total=64,68,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:34.041 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:34.041 00:41:34.041 Run status group 0 (all jobs): 00:41:34.041 READ: bw=255KiB/s (261kB/s), 255KiB/s-255KiB/s (261kB/s-261kB/s), io=256KiB (262kB), run=1004-1004msec 00:41:34.041 WRITE: bw=271KiB/s (277kB/s), 271KiB/s-271KiB/s (277kB/s-277kB/s), io=272KiB (279kB), run=1004-1004msec 00:41:34.041 00:41:34.041 Disk stats (read/write): 00:41:34.041 sda: ios=103/60, merge=0/0, ticks=39/859, in_queue=899, util=91.21% 00:41:34.041 17:19:35 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 -v 00:41:34.041 [global] 00:41:34.041 thread=1 00:41:34.041 invalidate=1 00:41:34.041 rw=randrw 00:41:34.041 time_based=1 00:41:34.041 runtime=1 00:41:34.041 ioengine=libaio 00:41:34.041 direct=1 00:41:34.041 bs=131072 00:41:34.041 iodepth=32 00:41:34.041 norandommap=0 00:41:34.041 numjobs=1 00:41:34.041 00:41:34.041 verify_dump=1 00:41:34.041 verify_backlog=512 00:41:34.041 verify_state_save=0 00:41:34.041 do_verify=1 00:41:34.041 verify=crc32c-intel 00:41:34.041 [job0] 00:41:34.041 filename=/dev/sda 00:41:34.041 queue_depth set to 113 (sda) 00:41:34.041 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:41:34.041 fio-3.35 00:41:34.041 Starting 1 thread 00:41:34.041 [2024-07-22 17:19:35.492441] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:41:35.996 [2024-07-22 17:19:37.086516] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:41:35.996 00:41:35.996 job0: (groupid=0, jobs=1): err= 0: pid=123701: Mon Jul 22 17:19:37 2024 00:41:35.996 read: IOPS=94, BW=11.8MiB/s (12.3MB/s)(17.4MiB/1478msec) 00:41:35.996 slat (usec): min=9, max=1230, avg=63.97, stdev=151.20 00:41:35.996 clat (usec): min=3, max=57253, avg=1841.68, stdev=5006.77 00:41:35.996 lat (usec): min=299, max=57278, avg=1905.64, stdev=4997.74 00:41:35.996 clat percentiles (usec): 00:41:35.996 | 1.00th=[ 6], 5.00th=[ 293], 10.00th=[ 347], 20.00th=[ 408], 00:41:35.996 | 30.00th=[ 523], 40.00th=[ 627], 50.00th=[ 840], 60.00th=[ 1004], 00:41:35.996 | 70.00th=[ 1336], 80.00th=[ 2073], 90.00th=[ 4555], 95.00th=[ 5211], 00:41:35.996 | 99.00th=[ 8455], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410], 00:41:35.996 | 99.99th=[57410] 00:41:35.996 bw ( KiB/s): min= 9984, max=25600, per=100.00%, avg=17792.00, stdev=11042.18, samples=2 00:41:35.996 iops : min= 78, max= 200, avg=139.00, stdev=86.27, samples=2 00:41:35.996 write: IOPS=88, BW=11.1MiB/s (11.6MB/s)(16.4MiB/1478msec); 0 zone resets 00:41:35.996 slat (usec): min=46, max=996, avg=136.47, stdev=140.16 00:41:35.996 clat (msec): min=18, max=1135, avg=354.94, stdev=303.06 00:41:35.996 lat (msec): min=18, max=1135, avg=355.08, stdev=303.06 00:41:35.996 clat percentiles (msec): 00:41:35.996 | 1.00th=[ 20], 5.00th=[ 40], 10.00th=[ 75], 20.00th=[ 126], 00:41:35.996 | 30.00th=[ 142], 40.00th=[ 144], 50.00th=[ 153], 60.00th=[ 363], 00:41:35.996 | 70.00th=[ 514], 80.00th=[ 642], 90.00th=[ 827], 95.00th=[ 978], 00:41:35.996 | 99.00th=[ 1070], 99.50th=[ 1133], 99.90th=[ 1133], 99.95th=[ 1133], 00:41:35.996 | 99.99th=[ 1133] 00:41:35.996 bw ( KiB/s): min= 6656, max=18944, per=100.00%, avg=12800.00, stdev=8688.93, samples=2 00:41:35.996 iops : min= 52, max= 148, avg=100.00, stdev=67.88, samples=2 00:41:35.996 lat 
(usec) : 4=0.37%, 10=0.74%, 500=12.96%, 750=10.00%, 1000=6.30% 00:41:35.996 lat (msec) : 2=10.74%, 4=4.07%, 10=5.93%, 20=0.74%, 50=2.22% 00:41:35.996 lat (msec) : 100=4.81%, 250=18.89%, 500=6.30%, 750=9.26%, 1000=4.44% 00:41:35.996 lat (msec) : 2000=2.22% 00:41:35.996 cpu : usr=0.47%, sys=0.68%, ctx=311, majf=0, minf=1 00:41:35.996 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=3.0%, 16=5.9%, 32=88.5%, >=64=0.0% 00:41:35.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:35.996 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.4%, 64=0.0%, >=64=0.0% 00:41:35.996 issued rwts: total=139,131,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:35.996 latency : target=0, window=0, percentile=100.00%, depth=32 00:41:35.996 00:41:35.996 Run status group 0 (all jobs): 00:41:35.996 READ: bw=11.8MiB/s (12.3MB/s), 11.8MiB/s-11.8MiB/s (12.3MB/s-12.3MB/s), io=17.4MiB (18.2MB), run=1478-1478msec 00:41:35.996 WRITE: bw=11.1MiB/s (11.6MB/s), 11.1MiB/s-11.1MiB/s (11.6MB/s-11.6MB/s), io=16.4MiB (17.2MB), run=1478-1478msec 00:41:35.996 00:41:35.996 Disk stats (read/write): 00:41:35.996 sda: ios=187/124, merge=0/0, ticks=247/35928, in_queue=36176, util=93.81% 00:41:35.996 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@65 -- # rm -f ./local-job0-0-verify.state 00:41:35.996 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@67 -- # trap - SIGINT SIGTERM EXIT 00:41:35.996 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@69 -- # iscsicleanup 00:41:35.996 Cleaning up iSCSI connection 00:41:35.996 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:41:35.996 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:41:35.996 Logging out of session [sid: 73, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:41:35.996 Logout of [sid: 73, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:41:35.997 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:41:35.997 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@983 -- # rm -rf 00:41:35.997 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@70 -- # rpc_cmd bdev_rbd_delete Ceph0 00:41:35.997 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:35.997 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:41:35.997 [2024-07-22 17:19:37.204909] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Ceph0) received event(SPDK_BDEV_EVENT_REMOVE) 00:41:35.997 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:35.997 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@71 -- # rpc_cmd bdev_rbd_unregister_cluster iscsi_rbd_cluster 00:41:35.997 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:35.997 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:41:35.997 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:35.997 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@72 -- # killprocess 123525 00:41:35.997 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@948 -- # '[' -z 123525 ']' 00:41:35.997 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@952 -- # kill -0 123525 00:41:35.997 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@953 -- # uname 00:41:35.997 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:35.997 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123525 00:41:35.997 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:35.997 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:35.997 killing process with pid 123525 
00:41:35.997 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123525' 00:41:35.997 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@967 -- # kill 123525 00:41:35.997 17:19:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@972 -- # wait 123525 00:41:38.524 17:19:39 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@73 -- # rbd_cleanup 00:41:38.524 17:19:39 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1031 -- # hash ceph 00:41:38.524 17:19:39 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:41:38.524 + base_dir=/var/tmp/ceph 00:41:38.524 + image=/var/tmp/ceph/ceph_raw.img 00:41:38.524 + dev=/dev/loop200 00:41:38.524 + pkill -9 ceph 00:41:38.524 + sleep 3 00:41:41.801 + umount /dev/loop200p2 00:41:41.801 umount: /dev/loop200p2: not mounted. 00:41:41.801 + losetup -d /dev/loop200 00:41:41.801 + rm -rf /var/tmp/ceph 00:41:41.801 17:19:42 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:41:41.801 17:19:42 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@75 -- # iscsitestfini 00:41:41.801 17:19:42 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:41:41.801 ************************************ 00:41:41.801 END TEST iscsi_tgt_rbd 00:41:41.801 ************************************ 00:41:41.801 00:41:41.801 real 0m32.643s 00:41:41.801 user 0m34.859s 00:41:41.801 sys 0m2.190s 00:41:41.801 17:19:42 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:41.801 17:19:42 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:41:41.801 17:19:42 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:41:41.801 17:19:42 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@57 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:41:41.801 17:19:42 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@59 -- # '[' 1 -eq 1 ']' 00:41:41.801 17:19:42 iscsi_tgt -- 
iscsi_tgt/iscsi_tgt.sh@60 -- # run_test iscsi_tgt_initiator /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/initiator/initiator.sh 00:41:41.801 17:19:42 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:41.801 17:19:42 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:41.801 17:19:42 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:41:41.801 ************************************ 00:41:41.801 START TEST iscsi_tgt_initiator 00:41:41.801 ************************************ 00:41:41.801 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/initiator/initiator.sh 00:41:41.801 * Looking for test storage... 00:41:41.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/initiator 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:41:41.802 17:19:42 
iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@11 -- # iscsitestinit 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@13 -- # MALLOC_BDEV_SIZE=64 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@16 -- # timing_enter start_iscsi_tgt 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:41:41.802 iSCSI target launched. pid: 123842 00:41:41.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@19 -- # pid=123842 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@20 -- # echo 'iSCSI target launched. pid: 123842' 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@18 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@21 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@22 -- # waitforlisten 123842 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@829 -- # '[' -z 123842 ']' 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:41.802 17:19:42 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:41:41.802 [2024-07-22 17:19:43.122734] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:41:41.802 [2024-07-22 17:19:43.123641] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123842 ] 00:41:42.060 [2024-07-22 17:19:43.495316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:42.317 [2024-07-22 17:19:43.743780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:42.575 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:42.575 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@862 -- # return 0 00:41:42.575 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@23 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:41:42.575 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:42.575 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:41:42.575 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:42.575 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@24 -- # rpc_cmd framework_start_init 00:41:42.575 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:42.575 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:41:43.510 iscsi_tgt is listening. Running tests... 00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@25 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@27 -- # timing_exit start_iscsi_tgt 00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@29 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@30 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@31 -- # rpc_cmd bdev_malloc_create 64 512 00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:41:43.510 Malloc0 00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@36 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # 
set +x 00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:43.510 17:19:44 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@37 -- # sleep 1 00:41:44.444 17:19:45 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@38 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:41:44.444 17:19:45 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 5 -s 512 00:41:44.444 17:19:45 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@40 -- # initiator_json_config 00:41:44.444 17:19:45 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:41:44.444 [2024-07-22 17:19:46.053420] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:41:44.444 [2024-07-22 17:19:46.053651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123892 ] 00:41:45.011 [2024-07-22 17:19:46.392098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:45.271 [2024-07-22 17:19:46.680024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:45.530 Running I/O for 5 seconds... 
00:41:50.793 00:41:50.794 Latency(us) 00:41:50.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:50.794 Job: iSCSI0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:41:50.794 Verification LBA range: start 0x0 length 0x4000 00:41:50.794 iSCSI0 : 5.01 12500.65 48.83 0.00 0.00 10197.87 2457.60 9949.56 00:41:50.794 =================================================================================================================== 00:41:50.794 Total : 12500.65 48.83 0.00 0.00 10197.87 2457.60 9949.56 00:41:52.168 17:19:53 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w unmap -t 5 -s 512 00:41:52.168 17:19:53 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@41 -- # initiator_json_config 00:41:52.168 17:19:53 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:41:52.168 [2024-07-22 17:19:53.626195] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:41:52.168 [2024-07-22 17:19:53.626451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124000 ] 00:41:52.439 [2024-07-22 17:19:53.972433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:52.710 [2024-07-22 17:19:54.215837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:52.968 Running I/O for 5 seconds... 
00:41:58.235 00:41:58.235 Latency(us) 00:41:58.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:58.235 Job: iSCSI0 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096) 00:41:58.235 iSCSI0 : 5.00 23288.13 90.97 0.00 0.00 5490.15 1131.99 10307.03 00:41:58.235 =================================================================================================================== 00:41:58.235 Total : 23288.13 90.97 0.00 0.00 5490.15 1131.99 10307.03 00:41:59.626 17:20:01 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w flush -t 5 -s 512 00:41:59.626 17:20:01 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@42 -- # initiator_json_config 00:41:59.626 17:20:01 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:41:59.626 [2024-07-22 17:20:01.152935] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:41:59.626 [2024-07-22 17:20:01.153159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124074 ] 00:42:00.192 [2024-07-22 17:20:01.518942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:00.192 [2024-07-22 17:20:01.763983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:00.451 Running I/O for 5 seconds... 
00:42:05.725 00:42:05.725 Latency(us) 00:42:05.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:05.725 Job: iSCSI0 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096) 00:42:05.725 iSCSI0 : 5.00 43413.40 169.58 0.00 0.00 2944.59 1072.41 4736.47 00:42:05.725 =================================================================================================================== 00:42:05.725 Total : 43413.40 169.58 0.00 0.00 2944.59 1072.41 4736.47 00:42:07.098 17:20:08 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w reset -t 10 -s 512 00:42:07.098 17:20:08 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@43 -- # initiator_json_config 00:42:07.098 17:20:08 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:42:07.098 [2024-07-22 17:20:08.683235] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:42:07.098 [2024-07-22 17:20:08.683435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124146 ] 00:42:07.664 [2024-07-22 17:20:09.028190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:07.664 [2024-07-22 17:20:09.270137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:08.230 Running I/O for 10 seconds... 
00:42:18.249 00:42:18.249 Latency(us) 00:42:18.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:18.249 Job: iSCSI0 (Core Mask 0x1, workload: reset, depth: 128, IO size: 4096) 00:42:18.249 Verification LBA range: start 0x0 length 0x4000 00:42:18.249 iSCSI0 : 10.01 13054.12 50.99 0.00 0.00 9770.73 1891.61 7983.48 00:42:18.249 =================================================================================================================== 00:42:18.249 Total : 13054.12 50.99 0.00 0.00 9770.73 1891.61 7983.48 00:42:19.629 17:20:21 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:42:19.629 17:20:21 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@47 -- # killprocess 123842 00:42:19.629 17:20:21 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@948 -- # '[' -z 123842 ']' 00:42:19.629 17:20:21 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@952 -- # kill -0 123842 00:42:19.629 17:20:21 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@953 -- # uname 00:42:19.629 17:20:21 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:19.629 17:20:21 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123842 00:42:19.629 17:20:21 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:42:19.629 killing process with pid 123842 00:42:19.629 17:20:21 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:42:19.629 17:20:21 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123842' 00:42:19.629 17:20:21 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@967 -- # kill 123842 00:42:19.629 17:20:21 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@972 -- # wait 123842 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@49 -- # 
iscsitestfini 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:42:22.174 00:42:22.174 real 0m40.756s 00:42:22.174 user 1m1.170s 00:42:22.174 sys 0m11.202s 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:42:22.174 ************************************ 00:42:22.174 END TEST iscsi_tgt_initiator 00:42:22.174 ************************************ 00:42:22.174 17:20:23 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:42:22.174 17:20:23 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@61 -- # run_test iscsi_tgt_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/bdev_io_wait/bdev_io_wait.sh 00:42:22.174 17:20:23 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:22.174 17:20:23 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:22.174 17:20:23 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:42:22.174 ************************************ 00:42:22.174 START TEST iscsi_tgt_bdev_io_wait 00:42:22.174 ************************************ 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/bdev_io_wait/bdev_io_wait.sh 00:42:22.174 * Looking for test storage... 
00:42:22.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/bdev_io_wait 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@26 
-- # INITIATOR_NAME=ANY 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@11 -- # iscsitestinit 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@13 -- # MALLOC_BDEV_SIZE=64 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@16 -- # timing_enter start_iscsi_tgt 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@19 -- # pid=124343 00:42:22.174 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@18 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:42:22.175 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@20 -- # echo 'iSCSI target launched. pid: 124343' 00:42:22.175 iSCSI target launched. 
pid: 124343 00:42:22.175 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@21 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:42:22.175 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@22 -- # waitforlisten 124343 00:42:22.175 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 124343 ']' 00:42:22.175 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:22.175 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:22.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:22.175 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:22.175 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:22.175 17:20:23 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:22.433 [2024-07-22 17:20:23.905071] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:42:22.433 [2024-07-22 17:20:23.905287] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124343 ] 00:42:22.691 [2024-07-22 17:20:24.247090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:22.977 [2024-07-22 17:20:24.482890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:23.267 17:20:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:23.267 17:20:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:42:23.267 17:20:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@23 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:42:23.267 17:20:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:23.267 17:20:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:23.267 17:20:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:23.267 17:20:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@25 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:42:23.267 17:20:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:23.267 17:20:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:23.267 17:20:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:23.267 17:20:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@26 -- # rpc_cmd framework_start_init 00:42:23.267 17:20:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:23.267 17:20:24 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:23.835 17:20:25 
iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:23.835 iscsi_tgt is listening. Running tests... 00:42:23.835 17:20:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@27 -- # echo 'iscsi_tgt is listening. Running tests...' 00:42:23.835 17:20:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@29 -- # timing_exit start_iscsi_tgt 00:42:23.835 17:20:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:42:23.835 17:20:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:23.835 17:20:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@31 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:42:23.835 17:20:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:23.835 17:20:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:23.835 17:20:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:23.835 17:20:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@32 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:42:23.835 17:20:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:23.835 17:20:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:23.835 17:20:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:23.835 17:20:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@33 -- # rpc_cmd bdev_malloc_create 64 512 00:42:23.835 17:20:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:23.835 17:20:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:24.093 Malloc0 00:42:24.093 17:20:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
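The `rpc_cmd` calls running here are the standard target-side provisioning sequence for these tests: a portal group, an initiator group, a malloc bdev, and a target node mapping them together. A minimal dry-run sketch of that sequence, assuming only the names shown in the log (`Malloc0`, `disk1`); the `rpc` stub is ours and simply echoes what a live run would pass to `scripts/rpc.py`:

```shell
# Dry-run sketch of the target-side setup performed in the log above.
# The rpc() stub only echoes; against a running iscsi_tgt you would
# invoke scripts/rpc.py instead.
rpc() { echo "rpc.py $*"; }

rpc iscsi_create_portal_group 1 10.0.0.1:3260       # portal group, tag 1
rpc iscsi_create_initiator_group 2 ANY 10.0.0.2/32  # initiator group, tag 2
rpc bdev_malloc_create 64 512                       # 64 MiB bdev, 512 B blocks
rpc iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d
```

The `1:2` argument binds portal-group tag 1 to initiator-group tag 2, matching the two groups created first.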
00:42:24.093 17:20:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@38 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:42:24.093 17:20:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:24.093 17:20:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:24.093 17:20:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:24.093 17:20:25 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@39 -- # sleep 1 00:42:25.029 17:20:26 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@40 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:42:25.029 17:20:26 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w write -t 1 00:42:25.029 17:20:26 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@42 -- # initiator_json_config 00:42:25.029 17:20:26 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:42:25.287 [2024-07-22 17:20:26.671497] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:42:25.287 [2024-07-22 17:20:26.671703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124392 ] 00:42:25.287 [2024-07-22 17:20:26.838671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:25.550 [2024-07-22 17:20:27.096822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:26.144 Running I/O for 1 seconds... 
00:42:27.116
00:42:27.116 Latency(us)
00:42:27.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:27.116 Job: iSCSI0 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:42:27.116 iSCSI0 : 1.01 19763.35 77.20 0.00 0.00 6455.02 1735.21 7864.32
00:42:27.116 ===================================================================================================================
00:42:27.116 Total : 19763.35 77.20 0.00 0.00 6455.02 1735.21 7864.32
00:42:28.505 17:20:29 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w read -t 1
00:42:28.505 17:20:29 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@43 -- # initiator_json_config
00:42:28.505 17:20:29 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq .
00:42:28.505 [2024-07-22 17:20:29.888538] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:42:28.505 [2024-07-22 17:20:29.888809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124426 ]
00:42:28.505 [2024-07-22 17:20:30.064491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:28.762 [2024-07-22 17:20:30.347326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:42:29.327 Running I/O for 1 seconds...
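The IOPS and MiB/s columns in the write-run table above are consistent with each other: at the 4096-byte IO size bdevperf was given, MiB/s = IOPS × 4096 / 2^20. A one-line awk cross-check (the numbers are copied from the table; the check itself is ours, not part of the test suite):

```shell
# Cross-check the write run: 19763.35 IOPS at 4096 B per IO,
# divided by 1048576 B/MiB, should reproduce the 77.20 MiB/s column.
awk 'BEGIN { printf "%.2f\n", 19763.35 * 4096 / 1048576 }'   # -> 77.20
```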
00:42:30.259
00:42:30.259 Latency(us)
00:42:30.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:30.259 Job: iSCSI0 (Core Mask 0x1, workload: read, depth: 128, IO size: 4096)
00:42:30.259 iSCSI0 : 1.00 25209.72 98.48 0.00 0.00 5061.36 1199.01 5659.93
00:42:30.259 ===================================================================================================================
00:42:30.259 Total : 25209.72 98.48 0.00 0.00 5061.36 1199.01 5659.93
00:42:31.633 17:20:32 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w flush -t 1
00:42:31.633 17:20:32 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@44 -- # initiator_json_config
00:42:31.633 17:20:32 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq .
00:42:31.633 [2024-07-22 17:20:33.110788] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:42:31.633 [2024-07-22 17:20:33.110987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124459 ]
00:42:31.895 [2024-07-22 17:20:33.280689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:32.154 [2024-07-22 17:20:33.600350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:42:32.413 Running I/O for 1 seconds...
00:42:33.789
00:42:33.789 Latency(us)
00:42:33.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:33.789 Job: iSCSI0 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096)
00:42:33.789 iSCSI0 : 1.00 31418.26 122.73 0.00 0.00 4063.97 1057.51 4617.31
00:42:33.789 ===================================================================================================================
00:42:33.789 Total : 31418.26 122.73 0.00 0.00 4063.97 1057.51 4617.31
00:42:34.736 17:20:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w unmap -t 1
00:42:34.736 17:20:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@45 -- # initiator_json_config
00:42:34.736 17:20:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq .
00:42:34.995 [2024-07-22 17:20:36.393460] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:42:34.995 [2024-07-22 17:20:36.394329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124497 ]
00:42:34.995 [2024-07-22 17:20:36.584854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:35.560 [2024-07-22 17:20:36.893291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:42:35.819 Running I/O for 1 seconds...
00:42:36.755
00:42:36.755 Latency(us)
00:42:36.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:36.755 Job: iSCSI0 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096)
00:42:36.755 iSCSI0 : 1.01 18258.42 71.32 0.00 0.00 6989.21 1139.43 8757.99
00:42:36.755 ===================================================================================================================
00:42:36.755 Total : 18258.42 71.32 0.00 0.00 6989.21 1139.43 8757.99
00:42:38.128 17:20:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@47 -- # trap - SIGINT SIGTERM EXIT
00:42:38.128 17:20:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@49 -- # killprocess 124343
00:42:38.128 17:20:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 124343 ']'
00:42:38.128 17:20:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 124343
00:42:38.128 17:20:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@953 -- # uname
00:42:38.128 17:20:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:42:38.128 17:20:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124343
00:42:38.128 17:20:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:42:38.128 17:20:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
killing process with pid 124343
17:20:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124343'
00:42:38.128 17:20:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 124343
00:42:38.128 17:20:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 124343
00:42:40.654 17:20:41 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@51 -- #
iscsitestfini 00:42:40.654 17:20:41 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:42:40.654 00:42:40.654 real 0m18.163s 00:42:40.654 user 0m27.047s 00:42:40.654 sys 0m3.623s 00:42:40.654 17:20:41 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:40.654 17:20:41 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:40.654 ************************************ 00:42:40.654 END TEST iscsi_tgt_bdev_io_wait 00:42:40.654 ************************************ 00:42:40.654 17:20:41 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:42:40.654 17:20:41 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@62 -- # run_test iscsi_tgt_resize /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/resize/resize.sh 00:42:40.654 17:20:41 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:40.654 17:20:41 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:40.654 17:20:41 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:42:40.654 ************************************ 00:42:40.654 START TEST iscsi_tgt_resize 00:42:40.654 ************************************ 00:42:40.654 17:20:41 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/resize/resize.sh 00:42:40.654 * Looking for test storage... 
00:42:40.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/resize 00:42:40.654 17:20:41 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:42:40.654 17:20:41 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 
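The `TARGET_NS_CMD` array sourced from common.sh above is later prepended to `ISCSI_APP` so that the target binary runs inside the `spdk_iscsi_ns` network namespace. The bash idiom is plain array concatenation; a self-contained sketch (here `echo` shows the resulting command line instead of executing `ip netns`):

```shell
# How common.sh builds the namespaced launch command: prepend one bash
# array to another, then expand the result as a single command line.
TARGET_NS_CMD=(ip netns exec spdk_iscsi_ns)
ISCSI_APP=(iscsi_tgt -m 0x2 -p 1 -s 512)
ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}")

# Show (rather than run) the final command:
echo "${ISCSI_APP[@]}"   # -> ip netns exec spdk_iscsi_ns iscsi_tgt -m 0x2 -p 1 -s 512
```

Quoting each expansion as `"${arr[@]}"` keeps arguments with spaces intact, which is why the helper scripts use arrays rather than flat strings.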
00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@12 -- # iscsitestinit 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@14 -- # BDEV_SIZE=64 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@15 -- # BDEV_NEW_SIZE=128 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@16 -- # BLOCK_SIZE=512 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@17 -- # RESIZE_SOCK=/var/tmp/spdk-resize.sock 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@19 -- # timing_enter start_iscsi_tgt 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@722 -- # xtrace_disable 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@22 -- # rm -f /var/tmp/spdk-resize.sock 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@24 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@25 -- # pid=124608 00:42:40.655 iSCSI target launched. pid: 124608 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@26 -- # echo 'iSCSI target launched. 
pid: 124608' 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@27 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@28 -- # waitforlisten 124608 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@829 -- # '[' -z 124608 ']' 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:40.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:40.655 17:20:41 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:42:40.655 [2024-07-22 17:20:42.127720] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:42:40.655 [2024-07-22 17:20:42.127907] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124608 ] 00:42:40.914 [2024-07-22 17:20:42.466176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:41.172 [2024-07-22 17:20:42.769968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:41.739 17:20:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:41.739 17:20:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@862 -- # return 0 00:42:41.739 17:20:43 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@29 -- # rpc_cmd framework_start_init 00:42:41.739 17:20:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:41.739 17:20:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:42:42.305 iscsi_tgt is listening. Running tests... 00:42:42.305 17:20:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:42.305 17:20:43 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@30 -- # echo 'iscsi_tgt is listening. Running tests...' 
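The `waitforlisten` step above blocks until the freshly launched target exposes its RPC socket at `/var/tmp/spdk.sock`. A simplified stand-in for that pattern (the function name and retry budget here are ours; the real helper in autotest_common.sh additionally probes the RPC server, not just the socket file):

```shell
# Poll until a UNIX-domain socket exists, up to a retry budget; a
# simplified sketch of autotest_common.sh's waitforlisten behavior.
wait_for_sock() {
  local sock=$1 retries=${2:-100}
  while [ "$retries" -gt 0 ]; do
    [ -S "$sock" ] && return 0   # -S: path exists and is a socket
    retries=$((retries - 1))
    sleep 0.1
  done
  return 1                        # timed out
}

# Usage: fail fast instead of hanging if the target never comes up.
wait_for_sock /var/tmp/spdk.sock 3 || echo "RPC socket not up yet"
```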
00:42:42.305 17:20:43 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@32 -- # timing_exit start_iscsi_tgt 00:42:42.305 17:20:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@728 -- # xtrace_disable 00:42:42.305 17:20:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:42:42.305 17:20:43 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@34 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:42:42.305 17:20:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:42.305 17:20:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:42:42.305 17:20:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:42.305 17:20:43 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@35 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:42:42.305 17:20:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:42.305 17:20:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:42:42.305 17:20:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:42.305 17:20:43 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@36 -- # rpc_cmd bdev_null_create Null0 64 512 00:42:42.305 17:20:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:42.305 17:20:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:42:42.305 Null0 00:42:42.306 17:20:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:42.306 17:20:43 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@41 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Null0:0 1:2 256 -d 00:42:42.306 17:20:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:42.306 17:20:43 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:42:42.306 17:20:43 iscsi_tgt.iscsi_tgt_resize -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:42.306 17:20:43 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@42 -- # sleep 1 00:42:43.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-resize.sock... 00:42:43.283 17:20:44 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@43 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:42:43.283 17:20:44 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@47 -- # bdevperf_pid=124657 00:42:43.283 17:20:44 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@48 -- # waitforlisten 124657 /var/tmp/spdk-resize.sock 00:42:43.283 17:20:44 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@829 -- # '[' -z 124657 ']' 00:42:43.283 17:20:44 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-resize.sock 00:42:43.283 17:20:44 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:43.283 17:20:44 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-resize.sock...' 00:42:43.283 17:20:44 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@46 -- # initiator_json_config 00:42:43.283 17:20:44 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-resize.sock --json /dev/fd/63 -q 16 -o 4096 -w read -t 5 -R -s 128 -z 00:42:43.283 17:20:44 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:43.283 17:20:44 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@139 -- # jq . 00:42:43.283 17:20:44 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:42:43.542 [2024-07-22 17:20:45.016340] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:42:43.542 [2024-07-22 17:20:45.016597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 128 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124657 ] 00:42:43.800 [2024-07-22 17:20:45.241843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:44.058 [2024-07-22 17:20:45.538033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:44.625 17:20:45 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:44.625 17:20:45 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@862 -- # return 0 00:42:44.625 17:20:45 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@50 -- # rpc_cmd bdev_null_resize Null0 128 00:42:44.625 17:20:45 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:44.625 17:20:45 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:42:44.625 [2024-07-22 17:20:45.947235] lun.c: 402:bdev_event_cb: *NOTICE*: bdev name (Null0) received event(SPDK_BDEV_EVENT_RESIZE) 00:42:44.625 true 00:42:44.625 17:20:45 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:44.625 17:20:45 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@52 -- # jq '.[].num_blocks' 00:42:44.625 17:20:45 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@52 -- # rpc_cmd -s /var/tmp/spdk-resize.sock bdev_get_bdevs 00:42:44.625 17:20:45 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:44.625 17:20:45 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:42:44.625 17:20:45 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:44.625 17:20:46 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@52 -- # num_block=131072 00:42:44.625 17:20:46 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@54 -- # total_size=64 
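The size check running here (`num_block=131072`, `total_size=64`) is plain block-count arithmetic: MiB = num_blocks × block_size / 2^20, for the 512-byte blocks `bdev_null_create` was given. A sketch of that conversion (the helper name is ours):

```shell
# Convert a bdev's block count to MiB: blocks * block_size / 1048576.
blocks_to_mib() { echo $(( $1 * $2 / 1048576 )); }

blocks_to_mib 131072 512   # -> 64   (Null0 before bdev_null_resize)
blocks_to_mib 262144 512   # -> 128  (Null0 after resizing to 128 MiB)
```

This is exactly the comparison resize.sh makes: the bdevperf side still reports the old 131072 blocks until the resize event propagates, while the target side reports 262144.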
00:42:44.625 17:20:46 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@55 -- # '[' 64 '!=' 64 ']'
00:42:44.625 17:20:46 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@59 -- # sleep 2
00:42:46.528 17:20:48 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@61 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-resize.sock perform_tests
00:42:46.528 Running I/O for 5 seconds...
00:42:51.792
00:42:51.792 Latency(us)
00:42:51.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:51.792 Job: iSCSI0 (Core Mask 0x1, workload: read, depth: 16, IO size: 4096)
00:42:51.792 iSCSI0 : 5.00 29992.17 117.16 0.00 0.00 529.90 292.31 1072.41
00:42:51.792 ===================================================================================================================
00:42:51.792 Total : 29992.17 117.16 0.00 0.00 529.90 292.31 1072.41
00:42:51.792 0
00:42:51.792 17:20:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@63 -- # jq '.[].num_blocks'
00:42:51.792 17:20:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@63 -- # rpc_cmd -s /var/tmp/spdk-resize.sock bdev_get_bdevs
00:42:51.792 17:20:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable
00:42:51.792 17:20:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x
00:42:51.792 17:20:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:42:51.792 17:20:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@63 -- # num_block=262144
00:42:51.792 17:20:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@65 -- # total_size=128
00:42:51.792 17:20:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@66 -- # '[' 128 '!=' 128 ']'
00:42:51.792 17:20:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@71 -- # trap - SIGINT SIGTERM EXIT
00:42:51.792 17:20:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@72 -- # killprocess 124657
00:42:51.792 17:20:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@948 -- # '[' -z 124657 ']'
00:42:51.792 17:20:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@952 -- # kill -0 124657
00:42:51.792 17:20:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@953 -- # uname
00:42:51.792 17:20:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:42:51.792 17:20:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124657
killing process with pid 124657
Received shutdown signal, test time was about 5.000000 seconds
00:42:51.792
00:42:51.792 Latency(us)
00:42:51.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:51.792 ===================================================================================================================
00:42:51.792 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:42:51.792 17:20:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:42:51.792 17:20:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:42:51.792 17:20:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124657'
00:42:51.792 17:20:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@967 -- # kill 124657
00:42:51.792 17:20:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@972 -- # wait 124657
00:42:53.163 17:20:54 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@73 -- # killprocess 124608
00:42:53.163 17:20:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@948 -- # '[' -z 124608 ']'
00:42:53.163 17:20:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@952 -- # kill -0 124608
00:42:53.163 17:20:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@953 -- # uname
00:42:53.163 17:20:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:42:53.163 17:20:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- #
ps --no-headers -o comm= 124608 00:42:53.163 killing process with pid 124608 00:42:53.163 17:20:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:42:53.163 17:20:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:42:53.163 17:20:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124608' 00:42:53.163 17:20:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@967 -- # kill 124608 00:42:53.163 17:20:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@972 -- # wait 124608 00:42:55.711 17:20:57 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@75 -- # iscsitestfini 00:42:55.711 17:20:57 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:42:55.711 00:42:55.711 real 0m15.233s 00:42:55.711 user 0m22.063s 00:42:55.711 sys 0m3.307s 00:42:55.711 17:20:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:55.711 17:20:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:42:55.711 ************************************ 00:42:55.711 END TEST iscsi_tgt_resize 00:42:55.711 ************************************ 00:42:55.711 17:20:57 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:42:55.711 17:20:57 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@65 -- # cleanup_veth_interfaces 00:42:55.711 17:20:57 iscsi_tgt -- iscsi_tgt/common.sh@95 -- # ip link set init_br nomaster 00:42:55.711 17:20:57 iscsi_tgt -- iscsi_tgt/common.sh@96 -- # ip link set tgt_br nomaster 00:42:55.711 17:20:57 iscsi_tgt -- iscsi_tgt/common.sh@97 -- # ip link set tgt_br2 nomaster 00:42:55.711 17:20:57 iscsi_tgt -- iscsi_tgt/common.sh@98 -- # ip link set init_br down 00:42:55.711 17:20:57 iscsi_tgt -- iscsi_tgt/common.sh@99 -- # ip link set tgt_br down 00:42:55.711 17:20:57 iscsi_tgt -- iscsi_tgt/common.sh@100 -- # ip link set tgt_br2 down 00:42:55.711 17:20:57 iscsi_tgt -- 
iscsi_tgt/common.sh@101 -- # ip link delete iscsi_br type bridge 00:42:55.711 17:20:57 iscsi_tgt -- iscsi_tgt/common.sh@102 -- # ip link delete spdk_init_int 00:42:55.711 17:20:57 iscsi_tgt -- iscsi_tgt/common.sh@103 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int 00:42:55.711 17:20:57 iscsi_tgt -- iscsi_tgt/common.sh@104 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int2 00:42:55.969 17:20:57 iscsi_tgt -- iscsi_tgt/common.sh@105 -- # ip netns del spdk_iscsi_ns 00:42:55.969 17:20:57 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:42:55.969 00:42:55.969 real 24m8.239s 00:42:55.969 user 43m31.208s 00:42:55.969 sys 7m24.316s 00:42:55.969 17:20:57 iscsi_tgt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:55.969 17:20:57 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:42:55.969 ************************************ 00:42:55.969 END TEST iscsi_tgt 00:42:55.969 ************************************ 00:42:55.969 17:20:57 -- common/autotest_common.sh@1142 -- # return 0 00:42:55.969 17:20:57 -- spdk/autotest.sh@264 -- # run_test spdkcli_iscsi /home/vagrant/spdk_repo/spdk/test/spdkcli/iscsi.sh 00:42:55.969 17:20:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:55.969 17:20:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:55.969 17:20:57 -- common/autotest_common.sh@10 -- # set +x 00:42:55.969 ************************************ 00:42:55.969 START TEST spdkcli_iscsi 00:42:55.969 ************************************ 00:42:55.969 17:20:57 spdkcli_iscsi -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/iscsi.sh 00:42:55.969 * Looking for test storage... 
00:42:55.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:42:55.969 17:20:57 spdkcli_iscsi -- spdkcli/iscsi.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:42:55.969 17:20:57 spdkcli_iscsi -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:42:55.970 17:20:57 spdkcli_iscsi -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:42:55.970 17:20:57 spdkcli_iscsi -- spdkcli/iscsi.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:42:55.970 17:20:57 spdkcli_iscsi -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:42:55.970 17:20:57 spdkcli_iscsi -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:42:55.970 17:20:57 spdkcli_iscsi -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:42:55.970 17:20:57 spdkcli_iscsi -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:42:55.970 17:20:57 spdkcli_iscsi -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:42:55.970 17:20:57 spdkcli_iscsi -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:42:55.970 17:20:57 spdkcli_iscsi -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:42:55.970 17:20:57 spdkcli_iscsi -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:42:55.970 17:20:57 spdkcli_iscsi -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:42:55.970 17:20:57 spdkcli_iscsi -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:42:55.970 17:20:57 spdkcli_iscsi -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:42:55.970 17:20:57 spdkcli_iscsi -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:42:55.970 17:20:57 spdkcli_iscsi -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:42:55.970 17:20:57 spdkcli_iscsi -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:42:55.970 17:20:57 spdkcli_iscsi -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 
00:42:55.970 17:20:57 spdkcli_iscsi -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:42:55.970 17:20:57 spdkcli_iscsi -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:42:55.970 17:20:57 spdkcli_iscsi -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:42:55.970 17:20:57 spdkcli_iscsi -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:42:55.970 17:20:57 spdkcli_iscsi -- spdkcli/iscsi.sh@12 -- # MATCH_FILE=spdkcli_iscsi.test 00:42:55.970 17:20:57 spdkcli_iscsi -- spdkcli/iscsi.sh@13 -- # SPDKCLI_BRANCH=/iscsi 00:42:55.970 17:20:57 spdkcli_iscsi -- spdkcli/iscsi.sh@15 -- # trap cleanup EXIT 00:42:55.970 17:20:57 spdkcli_iscsi -- spdkcli/iscsi.sh@17 -- # timing_enter run_iscsi_tgt 00:42:55.970 17:20:57 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:42:55.970 17:20:57 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:42:55.970 17:20:57 spdkcli_iscsi -- spdkcli/iscsi.sh@21 -- # iscsi_tgt_pid=124889 00:42:55.970 17:20:57 spdkcli_iscsi -- spdkcli/iscsi.sh@22 -- # waitforlisten 124889 00:42:55.970 17:20:57 spdkcli_iscsi -- common/autotest_common.sh@829 -- # '[' -z 124889 ']' 00:42:55.970 17:20:57 spdkcli_iscsi -- spdkcli/iscsi.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x3 -p 0 --wait-for-rpc 00:42:55.970 17:20:57 spdkcli_iscsi -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:55.970 17:20:57 spdkcli_iscsi -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:55.970 17:20:57 spdkcli_iscsi -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:55.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:42:55.970 17:20:57 spdkcli_iscsi -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:55.970 17:20:57 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:42:56.228 [2024-07-22 17:20:57.641715] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:42:56.228 [2024-07-22 17:20:57.641939] [ DPDK EAL parameters: iscsi --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124889 ] 00:42:56.228 [2024-07-22 17:20:57.818408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:56.487 [2024-07-22 17:20:58.080418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:56.487 [2024-07-22 17:20:58.080419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:57.053 17:20:58 spdkcli_iscsi -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:57.053 17:20:58 spdkcli_iscsi -- common/autotest_common.sh@862 -- # return 0 00:42:57.053 17:20:58 spdkcli_iscsi -- spdkcli/iscsi.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:42:58.054 17:20:59 spdkcli_iscsi -- spdkcli/iscsi.sh@25 -- # timing_exit run_iscsi_tgt 00:42:58.054 17:20:59 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:42:58.054 17:20:59 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:42:58.054 17:20:59 spdkcli_iscsi -- spdkcli/iscsi.sh@27 -- # timing_enter spdkcli_create_iscsi_config 00:42:58.054 17:20:59 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:42:58.054 17:20:59 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:42:58.054 17:20:59 spdkcli_iscsi -- spdkcli/iscsi.sh@48 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc0'\'' '\''Malloc0'\'' True 00:42:58.054 '\''/bdevs/malloc create 32 512 Malloc1'\'' 
'\''Malloc1'\'' True 00:42:58.054 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:42:58.054 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:42:58.054 '\''/iscsi/portal_groups create 1 "127.0.0.1:3261 127.0.0.1:3263@0x1"'\'' '\''host=127.0.0.1, port=3261'\'' True 00:42:58.054 '\''/iscsi/portal_groups create 2 127.0.0.1:3262'\'' '\''host=127.0.0.1, port=3262'\'' True 00:42:58.054 '\''/iscsi/initiator_groups create 2 ANY 10.0.2.15/32'\'' '\''hostname=ANY, netmask=10.0.2.15/32'\'' True 00:42:58.054 '\''/iscsi/initiator_groups create 3 ANZ 10.0.2.15/32'\'' '\''hostname=ANZ, netmask=10.0.2.15/32'\'' True 00:42:58.054 '\''/iscsi/initiator_groups add_initiator 2 ANW 10.0.2.16/32'\'' '\''hostname=ANW, netmask=10.0.2.16'\'' True 00:42:58.054 '\''/iscsi/target_nodes create Target0 Target0_alias "Malloc0:0 Malloc1:1" 1:2 64 g=1'\'' '\''Target0'\'' True 00:42:58.054 '\''/iscsi/target_nodes create Target1 Target1_alias Malloc2:0 1:2 64 g=1'\'' '\''Target1'\'' True 00:42:58.054 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_add_pg_ig_maps "1:3 2:2"'\'' '\''portal_group1 - initiator_group3'\'' True 00:42:58.054 '\''/iscsi/target_nodes add_lun iqn.2016-06.io.spdk:Target1 Malloc3 2'\'' '\''Malloc3'\'' True 00:42:58.054 '\''/iscsi/auth_groups create 1 "user:test1 secret:test1 muser:mutual_test1 msecret:mutual_test1,user:test3 secret:test3 muser:mutual_test3 msecret:mutual_test3"'\'' '\''user=test3'\'' True 00:42:58.054 '\''/iscsi/auth_groups add_secret 1 user=test2 secret=test2 muser=mutual_test2 msecret=mutual_test2'\'' '\''user=test2'\'' True 00:42:58.054 '\''/iscsi/auth_groups create 2 "user:test4 secret:test4 muser:mutual_test4 msecret:mutual_test4"'\'' '\''user=test4'\'' True 00:42:58.054 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 set_auth g=1 d=true'\'' '\''disable_chap: True'\'' True 00:42:58.054 '\''/iscsi/global_params set_auth g=1 d=true r=false'\'' '\''disable_chap: True'\'' True 00:42:58.054 
'\''/iscsi ls'\'' '\''Malloc'\'' True 00:42:58.054 ' 00:43:06.167 Executing command: ['/bdevs/malloc create 32 512 Malloc0', 'Malloc0', True] 00:43:06.167 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:43:06.167 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:43:06.167 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:43:06.167 Executing command: ['/iscsi/portal_groups create 1 "127.0.0.1:3261 127.0.0.1:3263@0x1"', 'host=127.0.0.1, port=3261', True] 00:43:06.167 Executing command: ['/iscsi/portal_groups create 2 127.0.0.1:3262', 'host=127.0.0.1, port=3262', True] 00:43:06.167 Executing command: ['/iscsi/initiator_groups create 2 ANY 10.0.2.15/32', 'hostname=ANY, netmask=10.0.2.15/32', True] 00:43:06.167 Executing command: ['/iscsi/initiator_groups create 3 ANZ 10.0.2.15/32', 'hostname=ANZ, netmask=10.0.2.15/32', True] 00:43:06.167 Executing command: ['/iscsi/initiator_groups add_initiator 2 ANW 10.0.2.16/32', 'hostname=ANW, netmask=10.0.2.16', True] 00:43:06.167 Executing command: ['/iscsi/target_nodes create Target0 Target0_alias "Malloc0:0 Malloc1:1" 1:2 64 g=1', 'Target0', True] 00:43:06.167 Executing command: ['/iscsi/target_nodes create Target1 Target1_alias Malloc2:0 1:2 64 g=1', 'Target1', True] 00:43:06.167 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_add_pg_ig_maps "1:3 2:2"', 'portal_group1 - initiator_group3', True] 00:43:06.167 Executing command: ['/iscsi/target_nodes add_lun iqn.2016-06.io.spdk:Target1 Malloc3 2', 'Malloc3', True] 00:43:06.167 Executing command: ['/iscsi/auth_groups create 1 "user:test1 secret:test1 muser:mutual_test1 msecret:mutual_test1,user:test3 secret:test3 muser:mutual_test3 msecret:mutual_test3"', 'user=test3', True] 00:43:06.167 Executing command: ['/iscsi/auth_groups add_secret 1 user=test2 secret=test2 muser=mutual_test2 msecret=mutual_test2', 'user=test2', True] 00:43:06.167 Executing command: 
['/iscsi/auth_groups create 2 "user:test4 secret:test4 muser:mutual_test4 msecret:mutual_test4"', 'user=test4', True] 00:43:06.167 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 set_auth g=1 d=true', 'disable_chap: True', True] 00:43:06.167 Executing command: ['/iscsi/global_params set_auth g=1 d=true r=false', 'disable_chap: True', True] 00:43:06.167 Executing command: ['/iscsi ls', 'Malloc', True] 00:43:06.167 17:21:07 spdkcli_iscsi -- spdkcli/iscsi.sh@49 -- # timing_exit spdkcli_create_iscsi_config 00:43:06.167 17:21:07 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:43:06.167 17:21:07 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:43:06.167 17:21:07 spdkcli_iscsi -- spdkcli/iscsi.sh@51 -- # timing_enter spdkcli_check_match 00:43:06.167 17:21:07 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:43:06.167 17:21:07 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:43:06.167 17:21:07 spdkcli_iscsi -- spdkcli/iscsi.sh@52 -- # check_match 00:43:06.167 17:21:07 spdkcli_iscsi -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /iscsi 00:43:06.167 17:21:07 spdkcli_iscsi -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_iscsi.test.match 00:43:06.426 17:21:07 spdkcli_iscsi -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_iscsi.test 00:43:06.426 17:21:07 spdkcli_iscsi -- spdkcli/iscsi.sh@53 -- # timing_exit spdkcli_check_match 00:43:06.426 17:21:07 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:43:06.426 17:21:07 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:43:06.426 17:21:07 spdkcli_iscsi -- spdkcli/iscsi.sh@55 -- # timing_enter spdkcli_clear_iscsi_config 00:43:06.426 17:21:07 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:43:06.426 17:21:07 spdkcli_iscsi -- 
common/autotest_common.sh@10 -- # set +x 00:43:06.426 17:21:07 spdkcli_iscsi -- spdkcli/iscsi.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/iscsi/auth_groups delete_secret 1 test2'\'' '\''user=test2'\'' 00:43:06.426 '\''/iscsi/auth_groups delete_secret_all 1'\'' '\''user=test1'\'' 00:43:06.426 '\''/iscsi/auth_groups delete 1'\'' '\''user=test1'\'' 00:43:06.426 '\''/iscsi/auth_groups delete_all'\'' '\''user=test4'\'' 00:43:06.426 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_remove_pg_ig_maps "1:3 2:2"'\'' '\''portal_group1 - initiator_group3'\'' 00:43:06.426 '\''/iscsi/target_nodes delete iqn.2016-06.io.spdk:Target1'\'' '\''Target1'\'' 00:43:06.426 '\''/iscsi/target_nodes delete_all'\'' '\''Target0'\'' 00:43:06.426 '\''/iscsi/initiator_groups delete_initiator 2 ANW 10.0.2.16/32'\'' '\''ANW'\'' 00:43:06.426 '\''/iscsi/initiator_groups delete 3'\'' '\''ANZ'\'' 00:43:06.426 '\''/iscsi/initiator_groups delete_all'\'' '\''ANY'\'' 00:43:06.426 '\''/iscsi/portal_groups delete 1'\'' '\''127.0.0.1:3261'\'' 00:43:06.426 '\''/iscsi/portal_groups delete_all'\'' '\''127.0.0.1:3262'\'' 00:43:06.426 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:43:06.426 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:43:06.426 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:43:06.426 '\''/bdevs/malloc delete Malloc0'\'' '\''Malloc0'\'' 00:43:06.426 ' 00:43:14.540 Executing command: ['/iscsi/auth_groups delete_secret 1 test2', 'user=test2', False] 00:43:14.540 Executing command: ['/iscsi/auth_groups delete_secret_all 1', 'user=test1', False] 00:43:14.540 Executing command: ['/iscsi/auth_groups delete 1', 'user=test1', False] 00:43:14.540 Executing command: ['/iscsi/auth_groups delete_all', 'user=test4', False] 00:43:14.540 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_remove_pg_ig_maps "1:3 2:2"', 'portal_group1 - initiator_group3', False] 00:43:14.540 Executing command: 
['/iscsi/target_nodes delete iqn.2016-06.io.spdk:Target1', 'Target1', False] 00:43:14.540 Executing command: ['/iscsi/target_nodes delete_all', 'Target0', False] 00:43:14.540 Executing command: ['/iscsi/initiator_groups delete_initiator 2 ANW 10.0.2.16/32', 'ANW', False] 00:43:14.540 Executing command: ['/iscsi/initiator_groups delete 3', 'ANZ', False] 00:43:14.540 Executing command: ['/iscsi/initiator_groups delete_all', 'ANY', False] 00:43:14.540 Executing command: ['/iscsi/portal_groups delete 1', '127.0.0.1:3261', False] 00:43:14.540 Executing command: ['/iscsi/portal_groups delete_all', '127.0.0.1:3262', False] 00:43:14.540 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:43:14.540 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:43:14.540 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:43:14.540 Executing command: ['/bdevs/malloc delete Malloc0', 'Malloc0', False] 00:43:14.540 17:21:14 spdkcli_iscsi -- spdkcli/iscsi.sh@73 -- # timing_exit spdkcli_clear_iscsi_config 00:43:14.540 17:21:14 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:43:14.540 17:21:14 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:43:14.540 17:21:14 spdkcli_iscsi -- spdkcli/iscsi.sh@75 -- # killprocess 124889 00:43:14.540 17:21:14 spdkcli_iscsi -- common/autotest_common.sh@948 -- # '[' -z 124889 ']' 00:43:14.540 17:21:14 spdkcli_iscsi -- common/autotest_common.sh@952 -- # kill -0 124889 00:43:14.540 17:21:14 spdkcli_iscsi -- common/autotest_common.sh@953 -- # uname 00:43:14.540 17:21:14 spdkcli_iscsi -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:14.540 17:21:14 spdkcli_iscsi -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124889 00:43:14.540 17:21:14 spdkcli_iscsi -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:14.540 17:21:14 spdkcli_iscsi -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:14.540 killing process 
with pid 124889 00:43:14.540 17:21:14 spdkcli_iscsi -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124889' 00:43:14.540 17:21:14 spdkcli_iscsi -- common/autotest_common.sh@967 -- # kill 124889 00:43:14.540 17:21:14 spdkcli_iscsi -- common/autotest_common.sh@972 -- # wait 124889 00:43:15.917 17:21:17 spdkcli_iscsi -- spdkcli/iscsi.sh@1 -- # cleanup 00:43:15.917 17:21:17 spdkcli_iscsi -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:43:15.917 17:21:17 spdkcli_iscsi -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:43:15.917 17:21:17 spdkcli_iscsi -- spdkcli/common.sh@16 -- # '[' -n 124889 ']' 00:43:15.917 17:21:17 spdkcli_iscsi -- spdkcli/common.sh@17 -- # killprocess 124889 00:43:15.917 17:21:17 spdkcli_iscsi -- common/autotest_common.sh@948 -- # '[' -z 124889 ']' 00:43:15.917 17:21:17 spdkcli_iscsi -- common/autotest_common.sh@952 -- # kill -0 124889 00:43:15.917 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (124889) - No such process 00:43:15.917 Process with pid 124889 is not found 00:43:15.917 17:21:17 spdkcli_iscsi -- common/autotest_common.sh@975 -- # echo 'Process with pid 124889 is not found' 00:43:15.917 17:21:17 spdkcli_iscsi -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:43:15.917 17:21:17 spdkcli_iscsi -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_iscsi.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:43:15.917 00:43:15.917 real 0m19.825s 00:43:15.917 user 0m41.050s 00:43:15.917 sys 0m1.498s 00:43:15.917 17:21:17 spdkcli_iscsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:15.917 17:21:17 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:43:15.917 ************************************ 00:43:15.917 END TEST spdkcli_iscsi 00:43:15.917 ************************************ 00:43:15.917 17:21:17 -- common/autotest_common.sh@1142 -- # return 0 00:43:15.917 17:21:17 -- spdk/autotest.sh@267 -- # 
run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:43:15.917 17:21:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:15.917 17:21:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:15.917 17:21:17 -- common/autotest_common.sh@10 -- # set +x 00:43:15.917 ************************************ 00:43:15.917 START TEST spdkcli_raid 00:43:15.917 ************************************ 00:43:15.917 17:21:17 spdkcli_raid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:43:15.917 * Looking for test storage... 00:43:15.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:43:15.917 17:21:17 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:43:15.917 17:21:17 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:43:15.917 17:21:17 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:43:15.917 17:21:17 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:43:15.917 17:21:17 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:43:15.917 17:21:17 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:43:15.917 17:21:17 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:43:15.917 17:21:17 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:43:15.917 17:21:17 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:43:15.917 17:21:17 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:43:15.917 17:21:17 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:43:15.917 17:21:17 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:43:15.917 17:21:17 
spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:43:15.917 17:21:17 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:43:15.917 17:21:17 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:43:15.917 17:21:17 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:43:15.917 17:21:17 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:43:15.917 17:21:17 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:43:15.917 17:21:17 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:43:15.917 17:21:17 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:43:15.917 17:21:17 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:43:15.917 17:21:17 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:43:15.917 17:21:17 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:43:15.917 17:21:17 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:43:15.917 17:21:17 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:43:15.917 17:21:17 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:43:15.917 17:21:17 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:43:15.917 17:21:17 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:43:15.917 17:21:17 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:43:15.917 17:21:17 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:43:15.917 17:21:17 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:43:15.917 17:21:17 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:43:15.917 17:21:17 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:43:15.917 17:21:17 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:43:15.917 17:21:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:15.917 17:21:17 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:43:15.917 17:21:17 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=125213 00:43:15.917 17:21:17 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 125213 00:43:15.917 17:21:17 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:43:15.917 17:21:17 spdkcli_raid -- common/autotest_common.sh@829 -- # '[' -z 125213 ']' 00:43:15.917 17:21:17 spdkcli_raid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:15.917 17:21:17 spdkcli_raid -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:15.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:15.917 17:21:17 spdkcli_raid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:15.917 17:21:17 spdkcli_raid -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:15.917 17:21:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:16.175 [2024-07-22 17:21:17.539534] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:43:16.175 [2024-07-22 17:21:17.539759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125213 ] 00:43:16.175 [2024-07-22 17:21:17.719710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:16.433 [2024-07-22 17:21:18.032287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:16.433 [2024-07-22 17:21:18.032299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:17.368 17:21:18 spdkcli_raid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:17.368 17:21:18 spdkcli_raid -- common/autotest_common.sh@862 -- # return 0 00:43:17.368 17:21:18 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:43:17.368 17:21:18 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:43:17.368 17:21:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:17.368 17:21:18 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:43:17.368 17:21:18 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:43:17.368 17:21:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:17.368 17:21:18 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:43:17.368 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:43:17.368 ' 00:43:19.270 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:43:19.270 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:43:19.270 17:21:20 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:43:19.270 17:21:20 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:43:19.270 17:21:20 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:43:19.270 17:21:20 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:43:19.270 17:21:20 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:43:19.270 17:21:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:19.270 17:21:20 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:43:19.270 ' 00:43:20.204 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:43:20.204 17:21:21 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:43:20.204 17:21:21 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:43:20.204 17:21:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:20.204 17:21:21 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:43:20.204 17:21:21 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:43:20.204 17:21:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:20.204 17:21:21 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:43:20.204 17:21:21 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:43:20.770 17:21:22 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:43:21.028 17:21:22 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:43:21.028 17:21:22 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:43:21.028 17:21:22 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:43:21.028 17:21:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:21.028 17:21:22 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:43:21.028 17:21:22 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:43:21.028 17:21:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:21.028 17:21:22 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:43:21.028 ' 00:43:21.963 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:43:21.963 17:21:23 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:43:21.963 17:21:23 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:43:21.963 17:21:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:22.221 17:21:23 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:43:22.222 17:21:23 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:43:22.222 17:21:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:22.222 17:21:23 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:43:22.222 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:43:22.222 ' 00:43:23.597 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:43:23.597 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:43:23.597 17:21:25 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:43:23.597 17:21:25 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:43:23.597 17:21:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:23.597 17:21:25 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 125213 00:43:23.597 17:21:25 spdkcli_raid -- common/autotest_common.sh@948 -- # '[' -z 125213 ']' 00:43:23.597 17:21:25 spdkcli_raid -- common/autotest_common.sh@952 -- # kill -0 125213 00:43:23.597 17:21:25 spdkcli_raid -- 
common/autotest_common.sh@953 -- # uname 00:43:23.597 17:21:25 spdkcli_raid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:23.597 17:21:25 spdkcli_raid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125213 00:43:23.597 17:21:25 spdkcli_raid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:23.597 killing process with pid 125213 00:43:23.597 17:21:25 spdkcli_raid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:23.597 17:21:25 spdkcli_raid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125213' 00:43:23.597 17:21:25 spdkcli_raid -- common/autotest_common.sh@967 -- # kill 125213 00:43:23.597 17:21:25 spdkcli_raid -- common/autotest_common.sh@972 -- # wait 125213 00:43:26.142 17:21:27 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:43:26.142 17:21:27 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 125213 ']' 00:43:26.142 17:21:27 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 125213 00:43:26.142 17:21:27 spdkcli_raid -- common/autotest_common.sh@948 -- # '[' -z 125213 ']' 00:43:26.142 17:21:27 spdkcli_raid -- common/autotest_common.sh@952 -- # kill -0 125213 00:43:26.142 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (125213) - No such process 00:43:26.142 Process with pid 125213 is not found 00:43:26.142 17:21:27 spdkcli_raid -- common/autotest_common.sh@975 -- # echo 'Process with pid 125213 is not found' 00:43:26.142 17:21:27 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:43:26.142 17:21:27 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:43:26.142 17:21:27 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:43:26.142 17:21:27 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:43:26.142 00:43:26.142 real 0m10.265s 00:43:26.142 user 0m20.885s 
00:43:26.142 sys 0m1.131s 00:43:26.142 17:21:27 spdkcli_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:26.142 17:21:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:26.142 ************************************ 00:43:26.142 END TEST spdkcli_raid 00:43:26.142 ************************************ 00:43:26.142 17:21:27 -- common/autotest_common.sh@1142 -- # return 0 00:43:26.142 17:21:27 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:43:26.142 17:21:27 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:43:26.142 17:21:27 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:43:26.142 17:21:27 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:43:26.142 17:21:27 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:43:26.142 17:21:27 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:43:26.142 17:21:27 -- spdk/autotest.sh@330 -- # '[' 1 -eq 1 ']' 00:43:26.142 17:21:27 -- spdk/autotest.sh@331 -- # run_test blockdev_rbd /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh rbd 00:43:26.142 17:21:27 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:43:26.143 17:21:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:26.143 17:21:27 -- common/autotest_common.sh@10 -- # set +x 00:43:26.143 ************************************ 00:43:26.143 START TEST blockdev_rbd 00:43:26.143 ************************************ 00:43:26.143 17:21:27 blockdev_rbd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh rbd 00:43:26.143 * Looking for test storage... 
00:43:26.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:43:26.143 17:21:27 blockdev_rbd -- bdev/nbd_common.sh@6 -- # set -e 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@20 -- # : 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@673 -- # uname -s 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@681 -- # test_type=rbd 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@682 -- # crypto_device= 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@683 -- # dek= 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@684 -- # env_ctx= 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@689 -- # [[ rbd == 
bdev ]] 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@689 -- # [[ rbd == crypto_* ]] 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=125475 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:43:26.143 17:21:27 blockdev_rbd -- bdev/blockdev.sh@49 -- # waitforlisten 125475 00:43:26.143 17:21:27 blockdev_rbd -- common/autotest_common.sh@829 -- # '[' -z 125475 ']' 00:43:26.143 17:21:27 blockdev_rbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:26.143 17:21:27 blockdev_rbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:26.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:26.143 17:21:27 blockdev_rbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:26.143 17:21:27 blockdev_rbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:26.143 17:21:27 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:43:26.401 [2024-07-22 17:21:27.828038] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:43:26.401 [2024-07-22 17:21:27.828237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125475 ] 00:43:26.401 [2024-07-22 17:21:27.992373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:26.969 [2024-07-22 17:21:28.284353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:27.535 17:21:29 blockdev_rbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:27.535 17:21:29 blockdev_rbd -- common/autotest_common.sh@862 -- # return 0 00:43:27.535 17:21:29 blockdev_rbd -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:43:27.535 17:21:29 blockdev_rbd -- bdev/blockdev.sh@719 -- # setup_rbd_conf 00:43:27.535 17:21:29 blockdev_rbd -- bdev/blockdev.sh@260 -- # timing_enter rbd_setup 00:43:27.535 17:21:29 blockdev_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:43:27.535 17:21:29 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:43:27.535 17:21:29 blockdev_rbd -- bdev/blockdev.sh@261 -- # rbd_setup 127.0.0.1 00:43:27.535 17:21:29 blockdev_rbd -- common/autotest_common.sh@1005 -- # '[' -z 127.0.0.1 ']' 00:43:27.535 17:21:29 blockdev_rbd -- common/autotest_common.sh@1009 -- # '[' -n '' ']' 00:43:27.535 17:21:29 blockdev_rbd -- common/autotest_common.sh@1018 -- # hash ceph 00:43:27.535 17:21:29 blockdev_rbd -- common/autotest_common.sh@1019 -- # export PG_NUM=128 00:43:27.535 17:21:29 blockdev_rbd -- common/autotest_common.sh@1019 -- # PG_NUM=128 00:43:27.535 17:21:29 blockdev_rbd -- common/autotest_common.sh@1020 -- # export RBD_POOL=rbd 00:43:27.535 17:21:29 blockdev_rbd -- common/autotest_common.sh@1020 -- # RBD_POOL=rbd 00:43:27.535 17:21:29 blockdev_rbd -- common/autotest_common.sh@1021 -- # export RBD_NAME=foo 00:43:27.535 17:21:29 blockdev_rbd -- common/autotest_common.sh@1021 -- # RBD_NAME=foo 
00:43:27.535 17:21:29 blockdev_rbd -- common/autotest_common.sh@1022 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:43:27.793 + base_dir=/var/tmp/ceph 00:43:27.793 + image=/var/tmp/ceph/ceph_raw.img 00:43:27.793 + dev=/dev/loop200 00:43:27.793 + pkill -9 ceph 00:43:27.793 + sleep 3 00:43:31.074 + umount /dev/loop200p2 00:43:31.074 umount: /dev/loop200p2: no mount point specified. 00:43:31.074 + losetup -d /dev/loop200 00:43:31.074 losetup: /dev/loop200: detach failed: No such device or address 00:43:31.074 + rm -rf /var/tmp/ceph 00:43:31.074 17:21:32 blockdev_rbd -- common/autotest_common.sh@1023 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 127.0.0.1 00:43:31.074 + set -e 00:43:31.074 +++ dirname /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 00:43:31.074 ++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/ceph 00:43:31.074 + script_dir=/home/vagrant/spdk_repo/spdk/scripts/ceph 00:43:31.074 + base_dir=/var/tmp/ceph 00:43:31.074 + mon_ip=127.0.0.1 00:43:31.074 + mon_dir=/var/tmp/ceph/mon.a 00:43:31.074 + pid_dir=/var/tmp/ceph/pid 00:43:31.074 + ceph_conf=/var/tmp/ceph/ceph.conf 00:43:31.074 + mnt_dir=/var/tmp/ceph/mnt 00:43:31.074 + image=/var/tmp/ceph_raw.img 00:43:31.074 + dev=/dev/loop200 00:43:31.074 + modprobe loop 00:43:31.074 + umount /dev/loop200p2 00:43:31.074 umount: /dev/loop200p2: no mount point specified. 00:43:31.074 + true 00:43:31.074 + losetup -d /dev/loop200 00:43:31.074 losetup: /dev/loop200: detach failed: No such device or address 00:43:31.074 + true 00:43:31.074 + '[' -d /var/tmp/ceph ']' 00:43:31.074 + mkdir /var/tmp/ceph 00:43:31.074 + cp /home/vagrant/spdk_repo/spdk/scripts/ceph/ceph.conf /var/tmp/ceph/ceph.conf 00:43:31.074 + '[' '!' 
-e /var/tmp/ceph_raw.img ']' 00:43:31.074 + fallocate -l 4G /var/tmp/ceph_raw.img 00:43:31.074 + mknod /dev/loop200 b 7 200 00:43:31.074 mknod: /dev/loop200: File exists 00:43:31.074 + true 00:43:31.074 + losetup /dev/loop200 /var/tmp/ceph_raw.img 00:43:31.074 Partitioning /dev/loop200 00:43:31.074 + PARTED='parted -s' 00:43:31.074 + SGDISK=sgdisk 00:43:31.074 + echo 'Partitioning /dev/loop200' 00:43:31.074 + parted -s /dev/loop200 mktable gpt 00:43:31.074 + sleep 2 00:43:33.008 + parted -s /dev/loop200 mkpart primary 0% 2GiB 00:43:33.008 + parted -s /dev/loop200 mkpart primary 2GiB 100% 00:43:33.008 + partno=0 00:43:33.008 Setting name on /dev/loop200 00:43:33.008 + echo 'Setting name on /dev/loop200' 00:43:33.008 + sgdisk -c 1:osd-device-0-journal /dev/loop200 00:43:33.942 Warning: The kernel is still using the old partition table. 00:43:33.942 The new table will be used at the next reboot or after you 00:43:33.942 run partprobe(8) or kpartx(8) 00:43:33.942 The operation has completed successfully. 00:43:33.942 + sgdisk -c 2:osd-device-0-data /dev/loop200 00:43:34.875 Warning: The kernel is still using the old partition table. 00:43:34.875 The new table will be used at the next reboot or after you 00:43:34.875 run partprobe(8) or kpartx(8) 00:43:34.875 The operation has completed successfully. 
00:43:34.875 + kpartx /dev/loop200 00:43:34.875 loop200p1 : 0 4192256 /dev/loop200 2048 00:43:34.875 loop200p2 : 0 4192256 /dev/loop200 4194304 00:43:34.875 ++ ceph -v 00:43:34.875 ++ awk '{print $3}' 00:43:35.133 + ceph_version=17.2.7 00:43:35.133 + ceph_maj=17 00:43:35.133 + '[' 17 -gt 12 ']' 00:43:35.133 + update_config=true 00:43:35.133 + rm -f /var/log/ceph/ceph-mon.a.log 00:43:35.133 + set_min_mon_release='--set-min-mon-release 14' 00:43:35.133 + ceph_osd_extra_config='--check-needs-journal --no-mon-config' 00:43:35.133 + mnt_pt=/var/tmp/ceph/mnt/osd-device-0-data 00:43:35.133 + mkdir -p /var/tmp/ceph/mnt/osd-device-0-data 00:43:35.133 + mkfs.xfs -f /dev/disk/by-partlabel/osd-device-0-data 00:43:35.133 meta-data=/dev/disk/by-partlabel/osd-device-0-data isize=512 agcount=4, agsize=131008 blks 00:43:35.133 = sectsz=512 attr=2, projid32bit=1 00:43:35.133 = crc=1 finobt=1, sparse=1, rmapbt=0 00:43:35.133 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:43:35.133 data = bsize=4096 blocks=524032, imaxpct=25 00:43:35.133 = sunit=0 swidth=0 blks 00:43:35.133 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:43:35.133 log =internal log bsize=4096 blocks=16384, version=2 00:43:35.133 = sectsz=512 sunit=0 blks, lazy-count=1 00:43:35.133 realtime =none extsz=4096 blocks=0, rtextents=0 00:43:35.133 Discarding blocks...Done. 00:43:35.133 + mount /dev/disk/by-partlabel/osd-device-0-data /var/tmp/ceph/mnt/osd-device-0-data 00:43:35.133 + cat 00:43:35.133 + rm -rf '/var/tmp/ceph/mon.a/*' 00:43:35.133 + mkdir -p /var/tmp/ceph/mon.a 00:43:35.133 + mkdir -p /var/tmp/ceph/pid 00:43:35.133 + rm -f /etc/ceph/ceph.client.admin.keyring 00:43:35.133 + ceph-authtool --create-keyring --gen-key --name=mon. 
/var/tmp/ceph/keyring --cap mon 'allow *' 00:43:35.133 creating /var/tmp/ceph/keyring 00:43:35.133 + ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /var/tmp/ceph/keyring 00:43:35.133 + monmaptool --create --clobber --add a 127.0.0.1:12046 --print /var/tmp/ceph/monmap --set-min-mon-release 14 00:43:35.133 monmaptool: monmap file /var/tmp/ceph/monmap 00:43:35.133 monmaptool: generated fsid cbbc233e-9b7d-43e5-9692-fe2305f2b5e3 00:43:35.133 setting min_mon_release = octopus 00:43:35.133 epoch 0 00:43:35.133 fsid cbbc233e-9b7d-43e5-9692-fe2305f2b5e3 00:43:35.133 last_changed 2024-07-22T17:21:36.675709+0000 00:43:35.133 created 2024-07-22T17:21:36.675709+0000 00:43:35.133 min_mon_release 15 (octopus) 00:43:35.133 election_strategy: 1 00:43:35.133 0: v2:127.0.0.1:12046/0 mon.a 00:43:35.133 monmaptool: writing epoch 0 to /var/tmp/ceph/monmap (1 monitors) 00:43:35.133 + sh -c 'ulimit -c unlimited && exec ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring --mon-data=/var/tmp/ceph/mon.a' 00:43:35.391 + '[' true = true ']' 00:43:35.391 + sed -i 's/mon addr = /mon addr = v2:/g' /var/tmp/ceph/ceph.conf 00:43:35.391 + cp /var/tmp/ceph/keyring /var/tmp/ceph/mon.a/keyring 00:43:35.391 + cp /var/tmp/ceph/ceph.conf /etc/ceph/ceph.conf 00:43:35.391 + cp /var/tmp/ceph/keyring /etc/ceph/keyring 00:43:35.391 + cp /var/tmp/ceph/keyring /etc/ceph/ceph.client.admin.keyring 00:43:35.391 + chmod a+r /etc/ceph/ceph.client.admin.keyring 00:43:35.391 ++ hostname 00:43:35.391 + ceph-run sh -c 'ulimit -n 16384 && ulimit -c unlimited && exec ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring --pid-file=/var/tmp/ceph/pid/root@fedora38-cloud-1716830599-074-updated-1705279005.pid --mon-data=/var/tmp/ceph/mon.a' 00:43:35.391 + true 00:43:35.391 + '[' true = true ']' 00:43:35.391 + ceph-conf --name mon.a --show-config-value log_file 00:43:35.391 
/var/log/ceph/ceph-mon.a.log 00:43:35.391 ++ ceph -s 00:43:35.391 ++ grep id 00:43:35.391 ++ awk '{print $2}' 00:43:35.651 + fsid=cbbc233e-9b7d-43e5-9692-fe2305f2b5e3 00:43:35.651 + sed -i 's/perf = true/perf = true\n\tfsid = cbbc233e-9b7d-43e5-9692-fe2305f2b5e3 \n/g' /var/tmp/ceph/ceph.conf 00:43:35.651 + (( ceph_maj < 18 )) 00:43:35.651 + sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' /var/tmp/ceph/ceph.conf 00:43:35.651 + cat /var/tmp/ceph/ceph.conf 00:43:35.651 [global] 00:43:35.651 debug_lockdep = 0/0 00:43:35.651 debug_context = 0/0 00:43:35.651 debug_crush = 0/0 00:43:35.651 debug_buffer = 0/0 00:43:35.651 debug_timer = 0/0 00:43:35.651 debug_filer = 0/0 00:43:35.651 debug_objecter = 0/0 00:43:35.651 debug_rados = 0/0 00:43:35.651 debug_rbd = 0/0 00:43:35.651 debug_ms = 0/0 00:43:35.651 debug_monc = 0/0 00:43:35.651 debug_tp = 0/0 00:43:35.651 debug_auth = 0/0 00:43:35.651 debug_finisher = 0/0 00:43:35.651 debug_heartbeatmap = 0/0 00:43:35.651 debug_perfcounter = 0/0 00:43:35.651 debug_asok = 0/0 00:43:35.651 debug_throttle = 0/0 00:43:35.651 debug_mon = 0/0 00:43:35.651 debug_paxos = 0/0 00:43:35.651 debug_rgw = 0/0 00:43:35.651 00:43:35.651 perf = true 00:43:35.651 osd objectstore = filestore 00:43:35.651 00:43:35.651 fsid = cbbc233e-9b7d-43e5-9692-fe2305f2b5e3 00:43:35.651 00:43:35.651 mutex_perf_counter = false 00:43:35.651 throttler_perf_counter = false 00:43:35.651 rbd cache = false 00:43:35.651 mon_allow_pool_delete = true 00:43:35.651 00:43:35.651 osd_pool_default_size = 1 00:43:35.651 00:43:35.651 [mon] 00:43:35.651 mon_max_pool_pg_num=166496 00:43:35.651 mon_osd_max_split_count = 10000 00:43:35.651 mon_pg_warn_max_per_osd = 10000 00:43:35.651 00:43:35.651 [osd] 00:43:35.651 osd_op_threads = 64 00:43:35.651 filestore_queue_max_ops=5000 00:43:35.651 filestore_queue_committing_max_ops=5000 00:43:35.651 journal_max_write_entries=1000 00:43:35.651 journal_queue_max_ops=3000 00:43:35.651 objecter_inflight_ops=102400 00:43:35.651 
filestore_wbthrottle_enable=false 00:43:35.651 filestore_queue_max_bytes=1048576000 00:43:35.651 filestore_queue_committing_max_bytes=1048576000 00:43:35.651 journal_max_write_bytes=1048576000 00:43:35.651 journal_queue_max_bytes=1048576000 00:43:35.651 ms_dispatch_throttle_bytes=1048576000 00:43:35.651 objecter_inflight_op_bytes=1048576000 00:43:35.651 filestore_max_sync_interval=10 00:43:35.651 osd_client_message_size_cap = 0 00:43:35.651 osd_client_message_cap = 0 00:43:35.651 osd_enable_op_tracker = false 00:43:35.651 filestore_fd_cache_size = 10240 00:43:35.651 filestore_fd_cache_shards = 64 00:43:35.651 filestore_op_threads = 16 00:43:35.651 osd_op_num_shards = 48 00:43:35.651 osd_op_num_threads_per_shard = 2 00:43:35.651 osd_pg_object_context_cache_count = 10240 00:43:35.651 filestore_odsync_write = True 00:43:35.651 journal_dynamic_throttle = True 00:43:35.651 00:43:35.651 [osd.0] 00:43:35.651 osd data = /var/tmp/ceph/mnt/osd-device-0-data 00:43:35.651 osd journal = /dev/disk/by-partlabel/osd-device-0-journal 00:43:35.651 00:43:35.651 # add mon address 00:43:35.651 [mon.a] 00:43:35.651 mon addr = v2:127.0.0.1:12046 00:43:35.651 + i=0 00:43:35.651 + mkdir -p /var/tmp/ceph/mnt 00:43:35.651 ++ uuidgen 00:43:35.651 + uuid=abf2813a-339e-4ad6-b914-971e6256cdc7 00:43:35.651 + ceph -c /var/tmp/ceph/ceph.conf osd create abf2813a-339e-4ad6-b914-971e6256cdc7 0 00:43:35.909 0 00:43:36.167 + ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --mkfs --mkkey --osd-uuid abf2813a-339e-4ad6-b914-971e6256cdc7 --check-needs-journal --no-mon-config 00:43:36.167 2024-07-22T17:21:37.567+0000 7fb314857400 -1 auth: error reading file: /var/tmp/ceph/mnt/osd-device-0-data/keyring: can't open /var/tmp/ceph/mnt/osd-device-0-data/keyring: (2) No such file or directory 00:43:36.167 2024-07-22T17:21:37.567+0000 7fb314857400 -1 created new key in keyring /var/tmp/ceph/mnt/osd-device-0-data/keyring 00:43:36.167 2024-07-22T17:21:37.618+0000 7fb314857400 -1 journal check: ondisk fsid 
00000000-0000-0000-0000-000000000000 doesn't match expected abf2813a-339e-4ad6-b914-971e6256cdc7, invalid (someone else's?) journal 00:43:36.167 2024-07-22T17:21:37.655+0000 7fb314857400 -1 journal do_read_entry(4096): bad header magic 00:43:36.167 2024-07-22T17:21:37.655+0000 7fb314857400 -1 journal do_read_entry(4096): bad header magic 00:43:36.167 ++ hostname 00:43:36.167 + ceph -c /var/tmp/ceph/ceph.conf osd crush add osd.0 1.0 host=fedora38-cloud-1716830599-074-updated-1705279005 root=default 00:43:37.543 add item id 0 name 'osd.0' weight 1 at location {host=fedora38-cloud-1716830599-074-updated-1705279005,root=default} to crush map 00:43:37.543 + ceph -c /var/tmp/ceph/ceph.conf -i /var/tmp/ceph/mnt/osd-device-0-data/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow *' 00:43:37.801 added key for osd.0 00:43:37.801 ++ ceph -c /var/tmp/ceph/ceph.conf config get osd osd_class_dir 00:43:38.059 + class_dir=/lib64/rados-classes 00:43:38.059 + [[ -e /lib64/rados-classes ]] 00:43:38.059 + ceph -c /var/tmp/ceph/ceph.conf config set osd osd_class_dir /lib64/rados-classes 00:43:38.315 + pkill -9 ceph-osd 00:43:38.315 + true 00:43:38.315 + sleep 2 00:43:40.216 + mkdir -p /var/tmp/ceph/pid 00:43:40.474 + env -i TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --pid-file=/var/tmp/ceph/pid/ceph-osd.0.pid 00:43:40.474 2024-07-22T17:21:41.879+0000 7f63f963e400 -1 Falling back to public interface 00:43:40.474 2024-07-22T17:21:41.932+0000 7f63f963e400 -1 journal do_read_entry(8192): bad header magic 00:43:40.474 2024-07-22T17:21:41.932+0000 7f63f963e400 -1 journal do_read_entry(8192): bad header magic 00:43:40.474 2024-07-22T17:21:41.941+0000 7f63f963e400 -1 osd.0 0 log_to_monitors true 00:43:41.410 17:21:42 blockdev_rbd -- common/autotest_common.sh@1025 -- # ceph osd pool create rbd 128 00:43:42.345 pool 'rbd' created 00:43:42.604 17:21:43 blockdev_rbd -- common/autotest_common.sh@1026 -- # rbd create foo --size 1000 
00:43:47.869 17:21:49 blockdev_rbd -- bdev/blockdev.sh@262 -- # timing_exit rbd_setup 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:43:47.869 17:21:49 blockdev_rbd -- bdev/blockdev.sh@264 -- # rpc_cmd bdev_rbd_create -b Ceph0 rbd foo 512 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:43:47.869 [2024-07-22 17:21:49.152201] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:43:47.869 WARNING:bdev_rbd_create should be used with specifying -c to have a cluster name after bdev_rbd_register_cluster. 00:43:47.869 Ceph0 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:47.869 17:21:49 blockdev_rbd -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:47.869 17:21:49 blockdev_rbd -- bdev/blockdev.sh@739 -- # cat 00:43:47.869 17:21:49 blockdev_rbd -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:47.869 17:21:49 blockdev_rbd -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:43:47.869 17:21:49 blockdev_rbd -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:47.869 17:21:49 blockdev_rbd -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:43:47.869 17:21:49 blockdev_rbd -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:43:47.869 17:21:49 blockdev_rbd -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:47.869 17:21:49 blockdev_rbd -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:43:47.869 17:21:49 blockdev_rbd -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Ceph0",' ' "aliases": [' ' "ecac2e0b-6e08-4218-86dd-35815edaa26a"' ' ],' ' "product_name": "Ceph Rbd Disk",' ' "block_size": 512,' ' "num_blocks": 2048000,' ' "uuid": "ecac2e0b-6e08-4218-86dd-35815edaa26a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": true,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "rbd": {' ' "pool_name": "rbd",' ' "rbd_name": "foo"' 
' }' ' }' '}' 00:43:47.869 17:21:49 blockdev_rbd -- bdev/blockdev.sh@748 -- # jq -r .name 00:43:47.869 17:21:49 blockdev_rbd -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:43:47.869 17:21:49 blockdev_rbd -- bdev/blockdev.sh@751 -- # hello_world_bdev=Ceph0 00:43:47.869 17:21:49 blockdev_rbd -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:43:47.869 17:21:49 blockdev_rbd -- bdev/blockdev.sh@753 -- # killprocess 125475 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@948 -- # '[' -z 125475 ']' 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@952 -- # kill -0 125475 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@953 -- # uname 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125475 00:43:47.869 killing process with pid 125475 00:43:47.869 17:21:49 blockdev_rbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:47.870 17:21:49 blockdev_rbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:47.870 17:21:49 blockdev_rbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125475' 00:43:47.870 17:21:49 blockdev_rbd -- common/autotest_common.sh@967 -- # kill 125475 00:43:47.870 17:21:49 blockdev_rbd -- common/autotest_common.sh@972 -- # wait 125475 00:43:50.400 17:21:51 blockdev_rbd -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:43:50.400 17:21:51 blockdev_rbd -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Ceph0 '' 00:43:50.400 17:21:51 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:43:50.400 17:21:51 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:50.400 17:21:51 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 
00:43:50.400 ************************************ 00:43:50.400 START TEST bdev_hello_world 00:43:50.400 ************************************ 00:43:50.400 17:21:51 blockdev_rbd.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Ceph0 '' 00:43:50.400 [2024-07-22 17:21:51.953388] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:43:50.400 [2024-07-22 17:21:51.953610] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126362 ] 00:43:50.658 [2024-07-22 17:21:52.131558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:50.916 [2024-07-22 17:21:52.437981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:51.483 [2024-07-22 17:21:52.913699] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:43:51.483 [2024-07-22 17:21:52.927616] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:43:51.483 [2024-07-22 17:21:52.927700] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Ceph0 00:43:51.483 [2024-07-22 17:21:52.927733] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:43:51.483 [2024-07-22 17:21:52.930201] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:43:51.483 [2024-07-22 17:21:52.947351] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:43:51.483 [2024-07-22 17:21:52.947456] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:43:51.483 [2024-07-22 17:21:52.951907] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:43:51.483 00:43:51.483 [2024-07-22 17:21:52.951974] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:43:52.858 00:43:52.858 real 0m2.431s 00:43:52.858 user 0m1.960s 00:43:52.858 sys 0m0.348s 00:43:52.858 17:21:54 blockdev_rbd.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:52.858 17:21:54 blockdev_rbd.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:43:52.858 ************************************ 00:43:52.858 END TEST bdev_hello_world 00:43:52.858 ************************************ 00:43:52.858 17:21:54 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:43:52.858 17:21:54 blockdev_rbd -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:43:52.858 17:21:54 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:43:52.858 17:21:54 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:52.858 17:21:54 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:43:52.858 ************************************ 00:43:52.858 START TEST bdev_bounds 00:43:52.858 ************************************ 00:43:52.858 17:21:54 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:43:52.858 17:21:54 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=126425 00:43:52.858 17:21:54 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:43:52.858 Process bdevio pid: 126425 00:43:52.858 17:21:54 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 126425' 00:43:52.858 17:21:54 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 126425 00:43:52.858 17:21:54 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 126425 ']' 00:43:52.858 17:21:54 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:52.858 17:21:54 blockdev_rbd.bdev_bounds -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:43:52.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:52.858 17:21:54 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:52.858 17:21:54 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:52.858 17:21:54 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:43:52.858 17:21:54 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:43:52.858 [2024-07-22 17:21:54.443264] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:43:52.858 [2024-07-22 17:21:54.443572] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126425 ] 00:43:53.116 [2024-07-22 17:21:54.620439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:53.375 [2024-07-22 17:21:54.887446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:53.375 [2024-07-22 17:21:54.888016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:43:53.375 [2024-07-22 17:21:54.888028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:53.942 [2024-07-22 17:21:55.361404] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:43:53.942 17:21:55 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:53.942 17:21:55 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:43:53.942 17:21:55 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@293 -- # 
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:43:53.942 I/O targets: 00:43:53.942 Ceph0: 2048000 blocks of 512 bytes (1000 MiB) 00:43:53.942 00:43:53.942 00:43:53.942 CUnit - A unit testing framework for C - Version 2.1-3 00:43:53.942 http://cunit.sourceforge.net/ 00:43:53.942 00:43:53.942 00:43:53.942 Suite: bdevio tests on: Ceph0 00:43:53.942 Test: blockdev write read block ...passed 00:43:53.942 Test: blockdev write zeroes read block ...passed 00:43:53.942 Test: blockdev write zeroes read no split ...passed 00:43:54.200 Test: blockdev write zeroes read split ...passed 00:43:54.200 Test: blockdev write zeroes read split partial ...passed 00:43:54.200 Test: blockdev reset ...passed 00:43:54.200 Test: blockdev write read 8 blocks ...passed 00:43:54.200 Test: blockdev write read size > 128k ...passed 00:43:54.200 Test: blockdev write read invalid size ...passed 00:43:54.200 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:54.200 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:54.200 Test: blockdev write read max offset ...passed 00:43:54.200 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:54.200 Test: blockdev writev readv 8 blocks ...passed 00:43:54.200 Test: blockdev writev readv 30 x 1block ...passed 00:43:54.200 Test: blockdev writev readv block ...passed 00:43:54.200 Test: blockdev writev readv size > 128k ...passed 00:43:54.200 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:54.200 Test: blockdev comparev and writev ...passed 00:43:54.200 Test: blockdev nvme passthru rw ...passed 00:43:54.200 Test: blockdev nvme passthru vendor specific ...passed 00:43:54.200 Test: blockdev nvme admin passthru ...passed 00:43:54.200 Test: blockdev copy ...passed 00:43:54.200 00:43:54.200 Run Summary: Type Total Ran Passed Failed Inactive 00:43:54.200 suites 1 1 n/a 0 0 00:43:54.200 tests 23 23 23 0 0 00:43:54.200 asserts 130 130 130 0 n/a 
00:43:54.200 00:43:54.201 Elapsed time = 0.558 seconds 00:43:54.201 0 00:43:54.201 17:21:55 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 126425 00:43:54.201 17:21:55 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 126425 ']' 00:43:54.201 17:21:55 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 126425 00:43:54.201 17:21:55 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:43:54.201 17:21:55 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:54.201 17:21:55 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 126425 00:43:54.201 17:21:55 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:54.201 17:21:55 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:54.201 killing process with pid 126425 00:43:54.201 17:21:55 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 126425' 00:43:54.201 17:21:55 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@967 -- # kill 126425 00:43:54.201 17:21:55 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@972 -- # wait 126425 00:43:55.575 17:21:57 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:43:55.575 00:43:55.575 real 0m2.774s 00:43:55.575 user 0m6.139s 00:43:55.575 sys 0m0.454s 00:43:55.575 17:21:57 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:55.575 17:21:57 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:43:55.575 ************************************ 00:43:55.575 END TEST bdev_bounds 00:43:55.575 ************************************ 00:43:55.575 17:21:57 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:43:55.575 17:21:57 blockdev_rbd -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Ceph0 '' 00:43:55.575 17:21:57 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:43:55.575 17:21:57 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:55.575 17:21:57 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:43:55.575 ************************************ 00:43:55.575 START TEST bdev_nbd 00:43:55.575 ************************************ 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Ceph0 '' 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Ceph0') 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- 
bdev/blockdev.sh@313 -- # local nbd_list 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Ceph0') 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=126504 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 126504 /var/tmp/spdk-nbd.sock 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 126504 ']' 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:55.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:55.575 17:21:57 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:43:55.834 [2024-07-22 17:21:57.277223] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:43:55.834 [2024-07-22 17:21:57.277477] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:56.092 [2024-07-22 17:21:57.452604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:56.350 [2024-07-22 17:21:57.747493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:56.917 [2024-07-22 17:21:58.232020] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:43:56.917 17:21:58 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:56.917 17:21:58 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:43:56.917 17:21:58 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Ceph0 00:43:56.917 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:56.917 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Ceph0') 00:43:56.917 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:43:56.917 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Ceph0 00:43:56.917 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:56.917 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Ceph0') 00:43:56.917 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:43:56.917 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:43:56.917 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:43:56.917 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:43:56.917 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # 
(( i < 1 )) 00:43:56.917 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Ceph0 00:43:57.176 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:43:57.176 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:43:57.176 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:43:57.176 17:21:58 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:43:57.176 17:21:58 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:43:57.176 17:21:58 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:43:57.176 17:21:58 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:43:57.176 17:21:58 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:43:57.176 17:21:58 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:43:57.176 17:21:58 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:43:57.176 17:21:58 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:43:57.176 17:21:58 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:57.176 1+0 records in 00:43:57.176 1+0 records out 00:43:57.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000924614 s, 4.4 MB/s 00:43:57.176 17:21:58 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:57.176 17:21:58 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:43:57.176 17:21:58 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:57.176 17:21:58 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 
'!=' 0 ']' 00:43:57.176 17:21:58 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:43:57.176 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:57.176 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:43:57.176 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:57.435 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:43:57.435 { 00:43:57.435 "nbd_device": "/dev/nbd0", 00:43:57.435 "bdev_name": "Ceph0" 00:43:57.435 } 00:43:57.435 ]' 00:43:57.435 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:43:57.435 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:43:57.435 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:43:57.435 { 00:43:57.435 "nbd_device": "/dev/nbd0", 00:43:57.435 "bdev_name": "Ceph0" 00:43:57.435 } 00:43:57.435 ]' 00:43:57.435 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:57.435 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:57.435 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:43:57.435 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:57.435 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:57.435 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:57.435 17:21:58 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:57.693 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:57.693 17:21:59 
blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:57.693 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:57.693 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:57.693 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:57.693 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:57.693 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:57.693 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:57.693 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:57.693 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:57.693 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:57.951 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:43:57.951 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:43:57.951 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:57.951 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:43:57.951 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:43:57.951 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:57.951 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:43:57.951 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:43:57.951 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:43:57.951 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:43:57.951 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:43:57.951 17:21:59 blockdev_rbd.bdev_nbd -- 
bdev/nbd_common.sh@127 -- # return 0 00:43:57.951 17:21:59 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Ceph0 /dev/nbd0 00:43:57.951 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:57.952 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Ceph0') 00:43:57.952 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:43:57.952 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:43:57.952 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:43:57.952 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Ceph0 /dev/nbd0 00:43:57.952 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:57.952 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Ceph0') 00:43:57.952 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:57.952 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:43:57.952 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:43:57.952 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:43:57.952 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:57.952 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:43:57.952 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Ceph0 /dev/nbd0 00:43:58.210 /dev/nbd0 00:43:58.210 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:43:58.210 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:43:58.210 17:21:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@866 -- # local 
nbd_name=nbd0 00:43:58.210 17:21:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:43:58.210 17:21:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:43:58.210 17:21:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:43:58.210 17:21:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:43:58.210 17:21:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:43:58.210 17:21:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:43:58.210 17:21:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:43:58.210 17:21:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:58.468 1+0 records in 00:43:58.468 1+0 records out 00:43:58.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000904281 s, 4.5 MB/s 00:43:58.468 17:21:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:58.468 17:21:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:43:58.468 17:21:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:58.468 17:21:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:43:58.468 17:21:59 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:43:58.468 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:58.468 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:43:58.468 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:58.468 17:21:59 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:58.468 17:21:59 blockdev_rbd.bdev_nbd -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:58.727 17:22:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:43:58.727 { 00:43:58.727 "nbd_device": "/dev/nbd0", 00:43:58.727 "bdev_name": "Ceph0" 00:43:58.727 } 00:43:58.727 ]' 00:43:58.727 17:22:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:43:58.727 { 00:43:58.727 "nbd_device": "/dev/nbd0", 00:43:58.727 "bdev_name": "Ceph0" 00:43:58.727 } 00:43:58.727 ]' 00:43:58.727 17:22:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:58.727 17:22:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:43:58.727 17:22:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:43:58.727 17:22:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:58.727 17:22:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:43:58.727 17:22:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:43:58.727 17:22:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:43:58.727 17:22:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:43:58.727 17:22:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:43:58.727 17:22:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:43:58.727 17:22:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:43:58.727 17:22:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:43:58.727 17:22:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:43:58.727 17:22:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:43:58.727 17:22:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:43:58.727 256+0 records in 00:43:58.727 256+0 records out 00:43:58.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00923192 s, 114 MB/s 00:43:58.727 17:22:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:58.727 17:22:00 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:44:00.103 256+0 records in 00:44:00.103 256+0 records out 00:44:00.103 1048576 bytes (1.0 MB, 1.0 MiB) copied, 1.2106 s, 866 kB/s 00:44:00.103 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:44:00.103 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:44:00.103 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:44:00.103 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:44:00.103 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:44:00.103 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:44:00.103 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:44:00.103 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:44:00.103 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:44:00.103 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:44:00.103 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:44:00.103 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:00.103 17:22:01 
blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:44:00.103 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:44:00.103 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:44:00.103 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:00.103 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:44:00.361 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:44:00.361 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:44:00.361 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:44:00.361 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:00.361 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:00.361 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:44:00.361 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:44:00.361 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:44:00.361 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:44:00.361 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:00.361 17:22:01 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:44:00.620 17:22:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:44:00.620 17:22:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:44:00.620 17:22:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:44:00.620 17:22:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 
00:44:00.620 17:22:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:44:00.620 17:22:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:44:00.620 17:22:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:44:00.620 17:22:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:44:00.620 17:22:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:44:00.620 17:22:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:44:00.620 17:22:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:44:00.620 17:22:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:44:00.620 17:22:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:44:00.620 17:22:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:00.620 17:22:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:44:00.620 17:22:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:44:00.620 17:22:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:44:00.620 17:22:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:44:00.877 malloc_lvol_verify 00:44:00.877 17:22:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:44:01.135 b6ec3ba0-84fb-4afd-9788-88f6bc13dce6 00:44:01.135 17:22:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:44:01.393 9edd9018-14d6-4664-8fea-f52b866f3f11 00:44:01.393 17:22:02 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@138 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:44:01.651 /dev/nbd0 00:44:01.908 17:22:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:44:01.908 mke2fs 1.46.5 (30-Dec-2021) 00:44:01.908 Discarding device blocks: 0/4096 done 00:44:01.908 Creating filesystem with 4096 1k blocks and 1024 inodes 00:44:01.908 00:44:01.908 Allocating group tables: 0/1 done 00:44:01.908 Writing inode tables: 0/1 done 00:44:01.908 Creating journal (1024 blocks): done 00:44:01.908 Writing superblocks and filesystem accounting information: 0/1 done 00:44:01.908 00:44:01.908 17:22:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:44:01.908 17:22:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:44:01.908 17:22:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:01.908 17:22:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:44:01.908 17:22:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:44:01.908 17:22:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:44:01.908 17:22:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:01.909 17:22:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:44:02.166 17:22:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:44:02.166 17:22:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:44:02.166 17:22:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:44:02.166 17:22:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:02.166 17:22:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:02.166 17:22:03 blockdev_rbd.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:44:02.166 17:22:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:44:02.166 17:22:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:44:02.166 17:22:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:44:02.166 17:22:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:44:02.166 17:22:03 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 126504 00:44:02.166 17:22:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 126504 ']' 00:44:02.166 17:22:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 126504 00:44:02.166 17:22:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:44:02.166 17:22:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:44:02.166 17:22:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 126504 00:44:02.167 17:22:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:44:02.167 17:22:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:44:02.167 killing process with pid 126504 00:44:02.167 17:22:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 126504' 00:44:02.167 17:22:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@967 -- # kill 126504 00:44:02.167 17:22:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@972 -- # wait 126504 00:44:03.541 17:22:04 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:44:03.541 00:44:03.541 real 0m7.838s 00:44:03.541 user 0m10.457s 00:44:03.541 sys 0m1.938s 00:44:03.541 17:22:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:03.541 17:22:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:44:03.541 ************************************ 
00:44:03.541 END TEST bdev_nbd 00:44:03.541 ************************************ 00:44:03.541 17:22:04 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:44:03.541 17:22:04 blockdev_rbd -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:44:03.541 17:22:04 blockdev_rbd -- bdev/blockdev.sh@763 -- # '[' rbd = nvme ']' 00:44:03.541 17:22:04 blockdev_rbd -- bdev/blockdev.sh@763 -- # '[' rbd = gpt ']' 00:44:03.541 17:22:05 blockdev_rbd -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:44:03.541 17:22:05 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:44:03.541 17:22:05 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:03.541 17:22:05 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:44:03.541 ************************************ 00:44:03.541 START TEST bdev_fio 00:44:03.541 ************************************ 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:44:03.541 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:03.541 
17:22:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Ceph0]' 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Ceph0 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev 
--iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:44:03.541 ************************************ 00:44:03.541 START TEST bdev_fio_rw_verify 00:44:03.541 ************************************ 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:03.541 17:22:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:03.542 17:22:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:44:03.542 17:22:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:44:03.542 17:22:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:03.542 17:22:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:03.542 17:22:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:03.542 17:22:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:44:03.542 17:22:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:03.542 17:22:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:03.542 17:22:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:44:03.542 17:22:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:44:03.542 17:22:05 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 
00:44:03.800 job_Ceph0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:44:03.800 fio-3.35 00:44:03.800 Starting 1 thread 00:44:16.019 00:44:16.019 job_Ceph0: (groupid=0, jobs=1): err= 0: pid=126755: Mon Jul 22 17:22:16 2024 00:44:16.019 read: IOPS=490, BW=1962KiB/s (2009kB/s)(20.0MiB/10457msec) 00:44:16.019 slat (usec): min=5, max=1150, avg=20.77, stdev=31.67 00:44:16.019 clat (usec): min=383, max=562196, avg=5572.22, stdev=40075.64 00:44:16.019 lat (usec): min=425, max=562234, avg=5592.99, stdev=40075.75 00:44:16.019 clat percentiles (usec): 00:44:16.019 | 50.000th=[ 1319], 99.000th=[114820], 99.900th=[557843], 00:44:16.019 | 99.990th=[566232], 99.999th=[566232] 00:44:16.019 write: IOPS=588, BW=2354KiB/s (2410kB/s)(24.0MiB/10457msec); 0 zone resets 00:44:16.019 slat (usec): min=20, max=963, avg=55.71, stdev=43.14 00:44:16.019 clat (msec): min=2, max=152, avg= 8.77, stdev=16.74 00:44:16.019 lat (msec): min=2, max=152, avg= 8.83, stdev=16.74 00:44:16.019 clat percentiles (msec): 00:44:16.019 | 50.000th=[ 6], 99.000th=[ 95], 99.900th=[ 140], 99.990th=[ 153], 00:44:16.019 | 99.999th=[ 153] 00:44:16.020 bw ( KiB/s): min= 72, max= 6464, per=100.00%, avg=2895.53, stdev=1868.22, samples=17 00:44:16.020 iops : min= 18, max= 1616, avg=723.88, stdev=467.05, samples=17 00:44:16.020 lat (usec) : 500=0.11%, 750=1.07%, 1000=5.19% 00:44:16.020 lat (msec) : 2=35.73%, 4=9.94%, 10=44.07%, 20=0.73%, 50=0.35% 00:44:16.020 lat (msec) : 100=1.93%, 250=0.59%, 500=0.17%, 750=0.13% 00:44:16.020 cpu : usr=96.92%, sys=1.41%, ctx=664, majf=0, minf=16067 00:44:16.020 IO depths : 1=0.1%, 2=0.2%, 4=14.5%, 8=85.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:16.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.020 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.020 issued rwts: total=5129,6153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.020 latency : target=0, window=0, 
percentile=100.00%, depth=8 00:44:16.020 00:44:16.020 Run status group 0 (all jobs): 00:44:16.020 READ: bw=1962KiB/s (2009kB/s), 1962KiB/s-1962KiB/s (2009kB/s-2009kB/s), io=20.0MiB (21.0MB), run=10457-10457msec 00:44:16.020 WRITE: bw=2354KiB/s (2410kB/s), 2354KiB/s-2354KiB/s (2410kB/s-2410kB/s), io=24.0MiB (25.2MB), run=10457-10457msec 00:44:16.955 ----------------------------------------------------- 00:44:16.955 Suppressions used: 00:44:16.955 count bytes template 00:44:16.955 1 6 /usr/src/fio/parse.c 00:44:16.955 1015 97440 /usr/src/fio/iolog.c 00:44:16.955 1 8 libtcmalloc_minimal.so 00:44:16.955 1 904 libcrypto.so 00:44:16.955 ----------------------------------------------------- 00:44:16.955 00:44:16.955 00:44:16.955 real 0m13.253s 00:44:16.955 user 0m13.848s 00:44:16.955 sys 0m2.033s 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:44:16.955 ************************************ 00:44:16.955 END TEST bdev_fio_rw_verify 00:44:16.955 ************************************ 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:44:16.955 17:22:18 
blockdev_rbd.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Ceph0",' ' "aliases": [' ' "ecac2e0b-6e08-4218-86dd-35815edaa26a"' ' ],' ' "product_name": "Ceph Rbd Disk",' ' "block_size": 512,' ' "num_blocks": 2048000,' ' "uuid": "ecac2e0b-6e08-4218-86dd-35815edaa26a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": true,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' 
"nvme_iov_md": false' ' },' ' "driver_specific": {' ' "rbd": {' ' "pool_name": "rbd",' ' "rbd_name": "foo"' ' }' ' }' '}' 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Ceph0 ]] 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Ceph0",' ' "aliases": [' ' "ecac2e0b-6e08-4218-86dd-35815edaa26a"' ' ],' ' "product_name": "Ceph Rbd Disk",' ' "block_size": 512,' ' "num_blocks": 2048000,' ' "uuid": "ecac2e0b-6e08-4218-86dd-35815edaa26a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": true,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "rbd": {' ' "pool_name": "rbd",' ' "rbd_name": "foo"' ' }' ' }' '}' 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Ceph0]' 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Ceph0 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev 
--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:16.955 17:22:18 blockdev_rbd.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:44:16.955 ************************************ 00:44:16.955 START TEST bdev_fio_trim 00:44:16.955 ************************************ 00:44:16.956 17:22:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:44:16.956 17:22:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:44:16.956 17:22:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:44:16.956 17:22:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:16.956 17:22:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:16.956 17:22:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:16.956 17:22:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:44:16.956 17:22:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:44:16.956 17:22:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:16.956 17:22:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:44:16.956 17:22:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:16.956 17:22:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:17.214 17:22:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:17.215 17:22:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:17.215 17:22:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:44:17.215 17:22:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:44:17.215 17:22:18 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:44:17.215 job_Ceph0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:44:17.215 fio-3.35 00:44:17.215 Starting 1 thread 00:44:29.442 00:44:29.442 job_Ceph0: (groupid=0, jobs=1): err= 0: pid=126950: Mon Jul 22 17:22:29 2024 00:44:29.442 
write: IOPS=767, BW=3070KiB/s (3144kB/s)(30.0MiB/10001msec); 0 zone resets 00:44:29.442 slat (usec): min=8, max=651, avg=41.94, stdev=48.85 00:44:29.442 clat (msec): min=2, max=288, avg=10.18, stdev= 9.11 00:44:29.442 lat (msec): min=2, max=288, avg=10.22, stdev= 9.11 00:44:29.442 clat percentiles (msec): 00:44:29.442 | 50.000th=[ 11], 99.000th=[ 18], 99.900th=[ 190], 99.990th=[ 288], 00:44:29.442 | 99.999th=[ 288] 00:44:29.442 bw ( KiB/s): min= 1056, max= 4352, per=100.00%, avg=3077.89, stdev=668.61, samples=19 00:44:29.442 iops : min= 264, max= 1088, avg=769.47, stdev=167.15, samples=19 00:44:29.442 trim: IOPS=767, BW=3070KiB/s (3144kB/s)(30.0MiB/10001msec); 0 zone resets 00:44:29.442 slat (usec): min=5, max=2303, avg=21.75, stdev=44.44 00:44:29.442 clat (usec): min=5, max=13387, avg=161.14, stdev=322.95 00:44:29.442 lat (usec): min=20, max=13395, avg=182.89, stdev=325.05 00:44:29.442 clat percentiles (usec): 00:44:29.442 | 50.000th=[ 122], 99.000th=[ 594], 99.900th=[ 1074], 99.990th=[13435], 00:44:29.442 | 99.999th=[13435] 00:44:29.442 bw ( KiB/s): min= 1056, max= 4352, per=100.00%, avg=3081.26, stdev=672.13, samples=19 00:44:29.442 iops : min= 264, max= 1088, avg=770.32, stdev=168.03, samples=19 00:44:29.442 lat (usec) : 10=0.21%, 20=0.79%, 50=7.05%, 100=12.97%, 250=20.03% 00:44:29.442 lat (usec) : 500=8.08%, 750=0.66%, 1000=0.13% 00:44:29.442 lat (msec) : 2=0.04%, 4=0.92%, 10=22.72%, 20=26.03%, 50=0.21% 00:44:29.442 lat (msec) : 100=0.07%, 250=0.07%, 500=0.02% 00:44:29.442 cpu : usr=96.19%, sys=1.84%, ctx=1200, majf=0, minf=20374 00:44:29.442 IO depths : 1=0.1%, 2=0.2%, 4=18.5%, 8=81.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:29.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:29.442 complete : 0=0.0%, 4=99.7%, 8=0.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:29.442 issued rwts: total=0,7677,7677,0 short=0,0,0,0 dropped=0,0,0,0 00:44:29.442 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:29.442 00:44:29.442 Run 
status group 0 (all jobs): 00:44:29.442 WRITE: bw=3070KiB/s (3144kB/s), 3070KiB/s-3070KiB/s (3144kB/s-3144kB/s), io=30.0MiB (31.4MB), run=10001-10001msec 00:44:29.442 TRIM: bw=3070KiB/s (3144kB/s), 3070KiB/s-3070KiB/s (3144kB/s-3144kB/s), io=30.0MiB (31.4MB), run=10001-10001msec 00:44:30.008 ----------------------------------------------------- 00:44:30.008 Suppressions used: 00:44:30.008 count bytes template 00:44:30.008 1 6 /usr/src/fio/parse.c 00:44:30.008 1 8 libtcmalloc_minimal.so 00:44:30.008 1 904 libcrypto.so 00:44:30.008 ----------------------------------------------------- 00:44:30.008 00:44:30.008 00:44:30.008 real 0m12.848s 00:44:30.008 user 0m13.045s 00:44:30.008 sys 0m1.413s 00:44:30.008 17:22:31 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:30.008 17:22:31 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:44:30.008 ************************************ 00:44:30.008 END TEST bdev_fio_trim 00:44:30.008 ************************************ 00:44:30.008 17:22:31 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:44:30.008 17:22:31 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:44:30.008 17:22:31 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:30.008 /home/vagrant/spdk_repo/spdk 00:44:30.008 17:22:31 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:44:30.008 17:22:31 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:44:30.008 00:44:30.008 real 0m26.422s 00:44:30.008 user 0m27.073s 00:44:30.008 sys 0m3.573s 00:44:30.008 17:22:31 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:30.008 17:22:31 blockdev_rbd.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:44:30.008 ************************************ 00:44:30.008 END TEST bdev_fio 00:44:30.008 ************************************ 00:44:30.008 17:22:31 
blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:44:30.008 17:22:31 blockdev_rbd -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:44:30.008 17:22:31 blockdev_rbd -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:44:30.008 17:22:31 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:44:30.008 17:22:31 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:30.008 17:22:31 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:44:30.008 ************************************ 00:44:30.008 START TEST bdev_verify 00:44:30.008 ************************************ 00:44:30.008 17:22:31 blockdev_rbd.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:44:30.281 [2024-07-22 17:22:31.649802] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:44:30.282 [2024-07-22 17:22:31.649999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127093 ] 00:44:30.282 [2024-07-22 17:22:31.816906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:30.546 [2024-07-22 17:22:32.109917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:30.546 [2024-07-22 17:22:32.109920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:33.831 [2024-07-22 17:22:35.001024] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:44:33.831 Running I/O for 5 seconds... 
00:44:39.097 00:44:39.097 Latency(us) 00:44:39.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:39.097 Job: Ceph0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:39.097 Verification LBA range: start 0x0 length 0x1f400 00:44:39.097 Ceph0 : 5.05 2009.38 7.85 0.00 0.00 63310.75 3112.96 766413.73 00:44:39.097 Job: Ceph0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:44:39.097 Verification LBA range: start 0x1f400 length 0x1f400 00:44:39.097 Ceph0 : 5.03 2247.74 8.78 0.00 0.00 56682.88 3708.74 674901.64 00:44:39.097 =================================================================================================================== 00:44:39.097 Total : 4257.11 16.63 0.00 0.00 59817.06 3112.96 766413.73 00:44:40.030 00:44:40.030 real 0m10.034s 00:44:40.030 user 0m18.214s 00:44:40.030 sys 0m2.181s 00:44:40.030 17:22:41 blockdev_rbd.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:40.030 17:22:41 blockdev_rbd.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:44:40.030 ************************************ 00:44:40.030 END TEST bdev_verify 00:44:40.030 ************************************ 00:44:40.030 17:22:41 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:44:40.030 17:22:41 blockdev_rbd -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:44:40.030 17:22:41 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:44:40.030 17:22:41 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:40.030 17:22:41 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:44:40.030 ************************************ 00:44:40.030 START TEST bdev_verify_big_io 00:44:40.030 ************************************ 00:44:40.030 17:22:41 blockdev_rbd.bdev_verify_big_io -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:44:40.288 [2024-07-22 17:22:41.727057] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:44:40.288 [2024-07-22 17:22:41.727333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127224 ] 00:44:40.546 [2024-07-22 17:22:41.901578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:40.804 [2024-07-22 17:22:42.168755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:40.804 [2024-07-22 17:22:42.168770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:41.061 [2024-07-22 17:22:42.644985] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:44:41.319 Running I/O for 5 seconds... 
00:44:46.584 00:44:46.584 Latency(us) 00:44:46.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:46.584 Job: Ceph0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:44:46.584 Verification LBA range: start 0x0 length 0x1f40 00:44:46.584 Ceph0 : 5.10 618.23 38.64 0.00 0.00 202003.97 2651.23 396552.38 00:44:46.584 Job: Ceph0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:44:46.584 Verification LBA range: start 0x1f40 length 0x1f40 00:44:46.584 Ceph0 : 5.09 578.22 36.14 0.00 0.00 216359.40 4617.31 438495.42 00:44:46.584 =================================================================================================================== 00:44:46.584 Total : 1196.46 74.78 0.00 0.00 208937.99 2651.23 438495.42 00:44:47.959 00:44:47.959 real 0m7.559s 00:44:47.959 user 0m14.655s 00:44:47.959 sys 0m1.345s 00:44:47.959 17:22:49 blockdev_rbd.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:47.959 17:22:49 blockdev_rbd.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:44:47.959 ************************************ 00:44:47.959 END TEST bdev_verify_big_io 00:44:47.959 ************************************ 00:44:47.959 17:22:49 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:44:47.960 17:22:49 blockdev_rbd -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:47.960 17:22:49 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:44:47.960 17:22:49 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:47.960 17:22:49 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:44:47.960 ************************************ 00:44:47.960 START TEST bdev_write_zeroes 00:44:47.960 ************************************ 00:44:47.960 17:22:49 blockdev_rbd.bdev_write_zeroes -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:47.960 [2024-07-22 17:22:49.348600] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:44:47.960 [2024-07-22 17:22:49.348835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127338 ] 00:44:47.960 [2024-07-22 17:22:49.525196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:48.218 [2024-07-22 17:22:49.785407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:48.784 [2024-07-22 17:22:50.254834] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:44:48.784 Running I/O for 1 seconds... 00:44:50.684 00:44:50.684 Latency(us) 00:44:50.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:50.684 Job: Ceph0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:44:50.684 Ceph0 : 1.52 3465.28 13.54 0.00 0.00 33485.87 6047.19 552885.53 00:44:50.684 =================================================================================================================== 00:44:50.684 Total : 3465.28 13.54 0.00 0.00 33485.87 6047.19 552885.53 00:44:51.627 00:44:51.627 real 0m4.005s 00:44:51.627 user 0m3.967s 00:44:51.627 sys 0m0.781s 00:44:51.627 17:22:53 blockdev_rbd.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:51.627 17:22:53 blockdev_rbd.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:44:51.627 ************************************ 00:44:51.627 END TEST bdev_write_zeroes 00:44:51.627 ************************************ 00:44:51.884 17:22:53 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 
00:44:51.884 17:22:53 blockdev_rbd -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:51.884 17:22:53 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:44:51.884 17:22:53 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:51.884 17:22:53 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:44:51.884 ************************************ 00:44:51.884 START TEST bdev_json_nonenclosed 00:44:51.884 ************************************ 00:44:51.884 17:22:53 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:51.884 [2024-07-22 17:22:53.407338] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:44:51.884 [2024-07-22 17:22:53.407541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127417 ] 00:44:52.141 [2024-07-22 17:22:53.575034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:52.400 [2024-07-22 17:22:53.893594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:52.400 [2024-07-22 17:22:53.893715] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:44:52.400 [2024-07-22 17:22:53.893749] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:44:52.400 [2024-07-22 17:22:53.893768] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:52.966 00:44:52.966 real 0m1.100s 00:44:52.966 user 0m0.830s 00:44:52.966 sys 0m0.160s 00:44:52.966 17:22:54 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:44:52.966 17:22:54 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:52.966 ************************************ 00:44:52.966 END TEST bdev_json_nonenclosed 00:44:52.966 ************************************ 00:44:52.966 17:22:54 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:44:52.966 17:22:54 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 234 00:44:52.966 17:22:54 blockdev_rbd -- bdev/blockdev.sh@781 -- # true 00:44:52.966 17:22:54 blockdev_rbd -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:52.966 17:22:54 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:44:52.966 17:22:54 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:52.966 17:22:54 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:44:52.966 ************************************ 00:44:52.966 START TEST bdev_json_nonarray 00:44:52.966 ************************************ 00:44:52.966 17:22:54 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:52.966 [2024-07-22 17:22:54.571941] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:44:52.966 [2024-07-22 17:22:54.572142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127447 ] 00:44:53.224 [2024-07-22 17:22:54.741771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:53.483 [2024-07-22 17:22:55.053447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:53.483 [2024-07-22 17:22:55.053629] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:44:53.483 [2024-07-22 17:22:55.053685] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:44:53.483 [2024-07-22 17:22:55.053719] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:54.049 00:44:54.049 real 0m1.118s 00:44:54.049 user 0m0.832s 00:44:54.049 sys 0m0.174s 00:44:54.049 17:22:55 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:44:54.049 17:22:55 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:54.049 17:22:55 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:44:54.049 ************************************ 00:44:54.049 END TEST bdev_json_nonarray 00:44:54.049 ************************************ 00:44:54.049 17:22:55 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 234 00:44:54.049 17:22:55 blockdev_rbd -- bdev/blockdev.sh@784 -- # true 00:44:54.049 17:22:55 blockdev_rbd -- bdev/blockdev.sh@786 -- # [[ rbd == bdev ]] 00:44:54.049 17:22:55 blockdev_rbd -- bdev/blockdev.sh@793 -- # [[ rbd == gpt ]] 00:44:54.049 17:22:55 blockdev_rbd -- bdev/blockdev.sh@797 -- # [[ rbd == crypto_sw ]] 00:44:54.049 17:22:55 blockdev_rbd -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:44:54.049 17:22:55 blockdev_rbd -- 
bdev/blockdev.sh@810 -- # cleanup 00:44:54.049 17:22:55 blockdev_rbd -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:44:54.049 17:22:55 blockdev_rbd -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:44:54.049 17:22:55 blockdev_rbd -- bdev/blockdev.sh@26 -- # [[ rbd == rbd ]] 00:44:54.049 17:22:55 blockdev_rbd -- bdev/blockdev.sh@27 -- # rbd_cleanup 00:44:54.049 17:22:55 blockdev_rbd -- common/autotest_common.sh@1031 -- # hash ceph 00:44:54.049 17:22:55 blockdev_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:44:54.049 + base_dir=/var/tmp/ceph 00:44:54.049 + image=/var/tmp/ceph/ceph_raw.img 00:44:54.049 + dev=/dev/loop200 00:44:54.049 + pkill -9 ceph 00:44:54.049 + sleep 3 00:44:57.335 + umount /dev/loop200p2 00:44:57.335 + losetup -d /dev/loop200 00:44:57.335 + rm -rf /var/tmp/ceph 00:44:57.335 17:22:58 blockdev_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:44:57.592 17:22:59 blockdev_rbd -- bdev/blockdev.sh@30 -- # [[ rbd == daos ]] 00:44:57.592 17:22:59 blockdev_rbd -- bdev/blockdev.sh@34 -- # [[ rbd = \g\p\t ]] 00:44:57.592 17:22:59 blockdev_rbd -- bdev/blockdev.sh@40 -- # [[ rbd == xnvme ]] 00:44:57.592 00:44:57.592 real 1m31.478s 00:44:57.592 user 1m50.496s 00:44:57.592 sys 0m13.045s 00:44:57.592 ************************************ 00:44:57.592 END TEST blockdev_rbd 00:44:57.592 ************************************ 00:44:57.592 17:22:59 blockdev_rbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:57.592 17:22:59 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:44:57.592 17:22:59 -- common/autotest_common.sh@1142 -- # return 0 00:44:57.592 17:22:59 -- spdk/autotest.sh@332 -- # run_test spdkcli_rbd /home/vagrant/spdk_repo/spdk/test/spdkcli/rbd.sh 00:44:57.592 17:22:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:57.592 17:22:59 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:44:57.592 17:22:59 -- common/autotest_common.sh@10 -- # set +x 00:44:57.592 ************************************ 00:44:57.592 START TEST spdkcli_rbd 00:44:57.592 ************************************ 00:44:57.592 17:22:59 spdkcli_rbd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/rbd.sh 00:44:57.592 * Looking for test storage... 00:44:57.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:44:57.851 17:22:59 spdkcli_rbd -- spdkcli/rbd.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:44:57.851 17:22:59 spdkcli_rbd -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:44:57.851 17:22:59 spdkcli_rbd -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:44:57.851 17:22:59 spdkcli_rbd -- spdkcli/rbd.sh@11 -- # MATCH_FILE=spdkcli_rbd.test 00:44:57.851 17:22:59 spdkcli_rbd -- spdkcli/rbd.sh@12 -- # SPDKCLI_BRANCH=/bdevs/rbd 00:44:57.851 17:22:59 spdkcli_rbd -- spdkcli/rbd.sh@14 -- # trap 'rbd_cleanup; cleanup' EXIT 00:44:57.851 17:22:59 spdkcli_rbd -- spdkcli/rbd.sh@15 -- # timing_enter run_spdk_tgt 00:44:57.851 17:22:59 spdkcli_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:44:57.851 17:22:59 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:44:57.851 17:22:59 spdkcli_rbd -- spdkcli/rbd.sh@16 -- # run_spdk_tgt 00:44:57.851 17:22:59 spdkcli_rbd -- spdkcli/common.sh@27 -- # spdk_tgt_pid=127561 00:44:57.851 17:22:59 spdkcli_rbd -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:44:57.851 17:22:59 spdkcli_rbd -- spdkcli/common.sh@28 -- # waitforlisten 127561 00:44:57.851 17:22:59 spdkcli_rbd -- common/autotest_common.sh@829 -- # '[' -z 127561 ']' 00:44:57.851 17:22:59 spdkcli_rbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:57.851 17:22:59 spdkcli_rbd -- common/autotest_common.sh@834 
-- # local max_retries=100 00:44:57.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:57.851 17:22:59 spdkcli_rbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:57.851 17:22:59 spdkcli_rbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:44:57.851 17:22:59 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:44:57.851 [2024-07-22 17:22:59.389408] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:44:57.851 [2024-07-22 17:22:59.389630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127561 ] 00:44:58.110 [2024-07-22 17:22:59.565867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:58.368 [2024-07-22 17:22:59.851910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:58.368 [2024-07-22 17:22:59.851910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:59.304 17:23:00 spdkcli_rbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:44:59.304 17:23:00 spdkcli_rbd -- common/autotest_common.sh@862 -- # return 0 00:44:59.304 17:23:00 spdkcli_rbd -- spdkcli/rbd.sh@17 -- # timing_exit run_spdk_tgt 00:44:59.304 17:23:00 spdkcli_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:44:59.304 17:23:00 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:44:59.304 17:23:00 spdkcli_rbd -- spdkcli/rbd.sh@19 -- # timing_enter spdkcli_create_rbd_config 00:44:59.304 17:23:00 spdkcli_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:44:59.304 17:23:00 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:44:59.304 17:23:00 spdkcli_rbd -- spdkcli/rbd.sh@20 -- # rbd_cleanup 00:44:59.304 17:23:00 spdkcli_rbd -- 
common/autotest_common.sh@1031 -- # hash ceph 00:44:59.304 17:23:00 spdkcli_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:44:59.304 + base_dir=/var/tmp/ceph 00:44:59.304 + image=/var/tmp/ceph/ceph_raw.img 00:44:59.304 + dev=/dev/loop200 00:44:59.304 + pkill -9 ceph 00:44:59.304 + sleep 3 00:45:02.585 + umount /dev/loop200p2 00:45:02.585 umount: /dev/loop200p2: no mount point specified. 00:45:02.585 + losetup -d /dev/loop200 00:45:02.585 losetup: /dev/loop200: detach failed: No such device or address 00:45:02.585 + rm -rf /var/tmp/ceph 00:45:02.585 17:23:03 spdkcli_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:45:02.585 17:23:03 spdkcli_rbd -- spdkcli/rbd.sh@21 -- # rbd_setup 127.0.0.1 00:45:02.585 17:23:03 spdkcli_rbd -- common/autotest_common.sh@1005 -- # '[' -z 127.0.0.1 ']' 00:45:02.585 17:23:03 spdkcli_rbd -- common/autotest_common.sh@1009 -- # '[' -n '' ']' 00:45:02.585 17:23:03 spdkcli_rbd -- common/autotest_common.sh@1018 -- # hash ceph 00:45:02.585 17:23:03 spdkcli_rbd -- common/autotest_common.sh@1019 -- # export PG_NUM=128 00:45:02.585 17:23:03 spdkcli_rbd -- common/autotest_common.sh@1019 -- # PG_NUM=128 00:45:02.585 17:23:03 spdkcli_rbd -- common/autotest_common.sh@1020 -- # export RBD_POOL=rbd 00:45:02.585 17:23:03 spdkcli_rbd -- common/autotest_common.sh@1020 -- # RBD_POOL=rbd 00:45:02.585 17:23:03 spdkcli_rbd -- common/autotest_common.sh@1021 -- # export RBD_NAME=foo 00:45:02.585 17:23:03 spdkcli_rbd -- common/autotest_common.sh@1021 -- # RBD_NAME=foo 00:45:02.585 17:23:03 spdkcli_rbd -- common/autotest_common.sh@1022 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:45:02.585 + base_dir=/var/tmp/ceph 00:45:02.585 + image=/var/tmp/ceph/ceph_raw.img 00:45:02.585 + dev=/dev/loop200 00:45:02.585 + pkill -9 ceph 00:45:02.585 + sleep 3 00:45:05.869 + umount /dev/loop200p2 00:45:05.869 umount: /dev/loop200p2: no mount point specified. 
00:45:05.869 + losetup -d /dev/loop200 00:45:05.869 losetup: /dev/loop200: detach failed: No such device or address 00:45:05.869 + rm -rf /var/tmp/ceph 00:45:05.869 17:23:06 spdkcli_rbd -- common/autotest_common.sh@1023 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 127.0.0.1 00:45:05.869 + set -e 00:45:05.869 +++ dirname /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 00:45:05.869 ++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/ceph 00:45:05.869 + script_dir=/home/vagrant/spdk_repo/spdk/scripts/ceph 00:45:05.869 + base_dir=/var/tmp/ceph 00:45:05.869 + mon_ip=127.0.0.1 00:45:05.869 + mon_dir=/var/tmp/ceph/mon.a 00:45:05.869 + pid_dir=/var/tmp/ceph/pid 00:45:05.869 + ceph_conf=/var/tmp/ceph/ceph.conf 00:45:05.869 + mnt_dir=/var/tmp/ceph/mnt 00:45:05.869 + image=/var/tmp/ceph_raw.img 00:45:05.869 + dev=/dev/loop200 00:45:05.869 + modprobe loop 00:45:05.869 + umount /dev/loop200p2 00:45:05.869 umount: /dev/loop200p2: no mount point specified. 00:45:05.869 + true 00:45:05.869 + losetup -d /dev/loop200 00:45:05.869 losetup: /dev/loop200: detach failed: No such device or address 00:45:05.869 + true 00:45:05.869 + '[' -d /var/tmp/ceph ']' 00:45:05.869 + mkdir /var/tmp/ceph 00:45:05.869 + cp /home/vagrant/spdk_repo/spdk/scripts/ceph/ceph.conf /var/tmp/ceph/ceph.conf 00:45:05.869 + '[' '!' 
-e /var/tmp/ceph_raw.img ']' 00:45:05.869 + fallocate -l 4G /var/tmp/ceph_raw.img 00:45:05.869 + mknod /dev/loop200 b 7 200 00:45:05.869 mknod: /dev/loop200: File exists 00:45:05.869 + true 00:45:05.869 + losetup /dev/loop200 /var/tmp/ceph_raw.img 00:45:05.869 + PARTED='parted -s' 00:45:05.869 + SGDISK=sgdisk 00:45:05.869 + echo 'Partitioning /dev/loop200' 00:45:05.869 Partitioning /dev/loop200 00:45:05.869 + parted -s /dev/loop200 mktable gpt 00:45:05.869 + sleep 2 00:45:07.784 + parted -s /dev/loop200 mkpart primary 0% 2GiB 00:45:07.784 + parted -s /dev/loop200 mkpart primary 2GiB 100% 00:45:07.784 Setting name on /dev/loop200 00:45:07.784 + partno=0 00:45:07.784 + echo 'Setting name on /dev/loop200' 00:45:07.784 + sgdisk -c 1:osd-device-0-journal /dev/loop200 00:45:08.719 Warning: The kernel is still using the old partition table. 00:45:08.719 The new table will be used at the next reboot or after you 00:45:08.719 run partprobe(8) or kpartx(8) 00:45:08.719 The operation has completed successfully. 00:45:08.719 + sgdisk -c 2:osd-device-0-data /dev/loop200 00:45:09.654 Warning: The kernel is still using the old partition table. 00:45:09.654 The new table will be used at the next reboot or after you 00:45:09.654 run partprobe(8) or kpartx(8) 00:45:09.654 The operation has completed successfully. 
00:45:09.654 + kpartx /dev/loop200 00:45:09.654 loop200p1 : 0 4192256 /dev/loop200 2048 00:45:09.654 loop200p2 : 0 4192256 /dev/loop200 4194304 00:45:09.654 ++ ceph -v 00:45:09.654 ++ awk '{print $3}' 00:45:09.912 + ceph_version=17.2.7 00:45:09.912 + ceph_maj=17 00:45:09.912 + '[' 17 -gt 12 ']' 00:45:09.912 + update_config=true 00:45:09.912 + rm -f /var/log/ceph/ceph-mon.a.log 00:45:09.912 + set_min_mon_release='--set-min-mon-release 14' 00:45:09.912 + ceph_osd_extra_config='--check-needs-journal --no-mon-config' 00:45:09.912 + mnt_pt=/var/tmp/ceph/mnt/osd-device-0-data 00:45:09.912 + mkdir -p /var/tmp/ceph/mnt/osd-device-0-data 00:45:09.912 + mkfs.xfs -f /dev/disk/by-partlabel/osd-device-0-data 00:45:09.912 meta-data=/dev/disk/by-partlabel/osd-device-0-data isize=512 agcount=4, agsize=131008 blks 00:45:09.912 = sectsz=512 attr=2, projid32bit=1 00:45:09.912 = crc=1 finobt=1, sparse=1, rmapbt=0 00:45:09.912 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:45:09.912 data = bsize=4096 blocks=524032, imaxpct=25 00:45:09.912 = sunit=0 swidth=0 blks 00:45:09.912 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:45:09.912 log =internal log bsize=4096 blocks=16384, version=2 00:45:09.912 = sectsz=512 sunit=0 blks, lazy-count=1 00:45:09.912 realtime =none extsz=4096 blocks=0, rtextents=0 00:45:09.912 Discarding blocks...Done. 00:45:09.912 + mount /dev/disk/by-partlabel/osd-device-0-data /var/tmp/ceph/mnt/osd-device-0-data 00:45:09.912 + cat 00:45:09.912 + rm -rf '/var/tmp/ceph/mon.a/*' 00:45:09.912 + mkdir -p /var/tmp/ceph/mon.a 00:45:09.912 + mkdir -p /var/tmp/ceph/pid 00:45:09.912 + rm -f /etc/ceph/ceph.client.admin.keyring 00:45:09.912 + ceph-authtool --create-keyring --gen-key --name=mon. 
/var/tmp/ceph/keyring --cap mon 'allow *' 00:45:09.912 creating /var/tmp/ceph/keyring 00:45:09.912 + ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /var/tmp/ceph/keyring 00:45:09.912 + monmaptool --create --clobber --add a 127.0.0.1:12046 --print /var/tmp/ceph/monmap --set-min-mon-release 14 00:45:09.912 monmaptool: monmap file /var/tmp/ceph/monmap 00:45:09.912 monmaptool: generated fsid 27b4fbf3-8e47-4c9d-975d-ce949d2eba73 00:45:09.912 setting min_mon_release = octopus 00:45:09.912 epoch 0 00:45:09.912 fsid 27b4fbf3-8e47-4c9d-975d-ce949d2eba73 00:45:09.912 last_changed 2024-07-22T17:23:11.519888+0000 00:45:09.912 created 2024-07-22T17:23:11.519888+0000 00:45:09.912 min_mon_release 15 (octopus) 00:45:09.912 election_strategy: 1 00:45:09.912 0: v2:127.0.0.1:12046/0 mon.a 00:45:09.912 monmaptool: writing epoch 0 to /var/tmp/ceph/monmap (1 monitors) 00:45:09.912 + sh -c 'ulimit -c unlimited && exec ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring --mon-data=/var/tmp/ceph/mon.a' 00:45:10.170 + '[' true = true ']' 00:45:10.170 + sed -i 's/mon addr = /mon addr = v2:/g' /var/tmp/ceph/ceph.conf 00:45:10.170 + cp /var/tmp/ceph/keyring /var/tmp/ceph/mon.a/keyring 00:45:10.170 + cp /var/tmp/ceph/ceph.conf /etc/ceph/ceph.conf 00:45:10.170 + cp /var/tmp/ceph/keyring /etc/ceph/keyring 00:45:10.170 + cp /var/tmp/ceph/keyring /etc/ceph/ceph.client.admin.keyring 00:45:10.170 + chmod a+r /etc/ceph/ceph.client.admin.keyring 00:45:10.170 ++ hostname 00:45:10.170 + ceph-run sh -c 'ulimit -n 16384 && ulimit -c unlimited && exec ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring --pid-file=/var/tmp/ceph/pid/root@fedora38-cloud-1716830599-074-updated-1705279005.pid --mon-data=/var/tmp/ceph/mon.a' 00:45:10.170 + true 00:45:10.170 + '[' true = true ']' 00:45:10.170 + ceph-conf --name mon.a --show-config-value log_file 00:45:10.170 
/var/log/ceph/ceph-mon.a.log 00:45:10.170 ++ ceph -s 00:45:10.170 ++ grep id 00:45:10.170 ++ awk '{print $2}' 00:45:10.427 + fsid=27b4fbf3-8e47-4c9d-975d-ce949d2eba73 00:45:10.427 + sed -i 's/perf = true/perf = true\n\tfsid = 27b4fbf3-8e47-4c9d-975d-ce949d2eba73 \n/g' /var/tmp/ceph/ceph.conf 00:45:10.427 + (( ceph_maj < 18 )) 00:45:10.427 + sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' /var/tmp/ceph/ceph.conf 00:45:10.427 + cat /var/tmp/ceph/ceph.conf 00:45:10.427 [global] 00:45:10.427 debug_lockdep = 0/0 00:45:10.427 debug_context = 0/0 00:45:10.427 debug_crush = 0/0 00:45:10.427 debug_buffer = 0/0 00:45:10.427 debug_timer = 0/0 00:45:10.427 debug_filer = 0/0 00:45:10.427 debug_objecter = 0/0 00:45:10.427 debug_rados = 0/0 00:45:10.427 debug_rbd = 0/0 00:45:10.427 debug_ms = 0/0 00:45:10.427 debug_monc = 0/0 00:45:10.427 debug_tp = 0/0 00:45:10.427 debug_auth = 0/0 00:45:10.427 debug_finisher = 0/0 00:45:10.427 debug_heartbeatmap = 0/0 00:45:10.427 debug_perfcounter = 0/0 00:45:10.427 debug_asok = 0/0 00:45:10.427 debug_throttle = 0/0 00:45:10.427 debug_mon = 0/0 00:45:10.427 debug_paxos = 0/0 00:45:10.427 debug_rgw = 0/0 00:45:10.427 00:45:10.427 perf = true 00:45:10.427 osd objectstore = filestore 00:45:10.427 00:45:10.427 fsid = 27b4fbf3-8e47-4c9d-975d-ce949d2eba73 00:45:10.427 00:45:10.427 mutex_perf_counter = false 00:45:10.427 throttler_perf_counter = false 00:45:10.427 rbd cache = false 00:45:10.427 mon_allow_pool_delete = true 00:45:10.427 00:45:10.427 osd_pool_default_size = 1 00:45:10.427 00:45:10.427 [mon] 00:45:10.427 mon_max_pool_pg_num=166496 00:45:10.427 mon_osd_max_split_count = 10000 00:45:10.427 mon_pg_warn_max_per_osd = 10000 00:45:10.427 00:45:10.427 [osd] 00:45:10.427 osd_op_threads = 64 00:45:10.427 filestore_queue_max_ops=5000 00:45:10.427 filestore_queue_committing_max_ops=5000 00:45:10.427 journal_max_write_entries=1000 00:45:10.427 journal_queue_max_ops=3000 00:45:10.427 objecter_inflight_ops=102400 00:45:10.427 
filestore_wbthrottle_enable=false 00:45:10.427 filestore_queue_max_bytes=1048576000 00:45:10.427 filestore_queue_committing_max_bytes=1048576000 00:45:10.427 journal_max_write_bytes=1048576000 00:45:10.427 journal_queue_max_bytes=1048576000 00:45:10.427 ms_dispatch_throttle_bytes=1048576000 00:45:10.427 objecter_inflight_op_bytes=1048576000 00:45:10.427 filestore_max_sync_interval=10 00:45:10.427 osd_client_message_size_cap = 0 00:45:10.427 osd_client_message_cap = 0 00:45:10.427 osd_enable_op_tracker = false 00:45:10.427 filestore_fd_cache_size = 10240 00:45:10.427 filestore_fd_cache_shards = 64 00:45:10.427 filestore_op_threads = 16 00:45:10.427 osd_op_num_shards = 48 00:45:10.427 osd_op_num_threads_per_shard = 2 00:45:10.427 osd_pg_object_context_cache_count = 10240 00:45:10.427 filestore_odsync_write = True 00:45:10.427 journal_dynamic_throttle = True 00:45:10.427 00:45:10.427 [osd.0] 00:45:10.427 osd data = /var/tmp/ceph/mnt/osd-device-0-data 00:45:10.427 osd journal = /dev/disk/by-partlabel/osd-device-0-journal 00:45:10.427 00:45:10.427 # add mon address 00:45:10.427 [mon.a] 00:45:10.427 mon addr = v2:127.0.0.1:12046 00:45:10.427 + i=0 00:45:10.427 + mkdir -p /var/tmp/ceph/mnt 00:45:10.427 ++ uuidgen 00:45:10.427 + uuid=de0a9682-23ec-4bf2-9e71-e8f3aef9ebe4 00:45:10.427 + ceph -c /var/tmp/ceph/ceph.conf osd create de0a9682-23ec-4bf2-9e71-e8f3aef9ebe4 0 00:45:10.992 0 00:45:10.992 + ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --mkfs --mkkey --osd-uuid de0a9682-23ec-4bf2-9e71-e8f3aef9ebe4 --check-needs-journal --no-mon-config 00:45:10.992 2024-07-22T17:23:12.382+0000 7f2cd2454400 -1 auth: error reading file: /var/tmp/ceph/mnt/osd-device-0-data/keyring: can't open /var/tmp/ceph/mnt/osd-device-0-data/keyring: (2) No such file or directory 00:45:10.992 2024-07-22T17:23:12.382+0000 7f2cd2454400 -1 created new key in keyring /var/tmp/ceph/mnt/osd-device-0-data/keyring 00:45:10.992 2024-07-22T17:23:12.430+0000 7f2cd2454400 -1 journal check: ondisk fsid 
00000000-0000-0000-0000-000000000000 doesn't match expected de0a9682-23ec-4bf2-9e71-e8f3aef9ebe4, invalid (someone else's?) journal 00:45:10.992 2024-07-22T17:23:12.465+0000 7f2cd2454400 -1 journal do_read_entry(4096): bad header magic 00:45:10.992 2024-07-22T17:23:12.465+0000 7f2cd2454400 -1 journal do_read_entry(4096): bad header magic 00:45:10.992 ++ hostname 00:45:10.992 + ceph -c /var/tmp/ceph/ceph.conf osd crush add osd.0 1.0 host=fedora38-cloud-1716830599-074-updated-1705279005 root=default 00:45:12.365 add item id 0 name 'osd.0' weight 1 at location {host=fedora38-cloud-1716830599-074-updated-1705279005,root=default} to crush map 00:45:12.365 + ceph -c /var/tmp/ceph/ceph.conf -i /var/tmp/ceph/mnt/osd-device-0-data/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow *' 00:45:12.623 added key for osd.0 00:45:12.623 ++ ceph -c /var/tmp/ceph/ceph.conf config get osd osd_class_dir 00:45:12.881 + class_dir=/lib64/rados-classes 00:45:12.881 + [[ -e /lib64/rados-classes ]] 00:45:12.881 + ceph -c /var/tmp/ceph/ceph.conf config set osd osd_class_dir /lib64/rados-classes 00:45:13.139 + pkill -9 ceph-osd 00:45:13.139 + true 00:45:13.139 + sleep 2 00:45:15.667 + mkdir -p /var/tmp/ceph/pid 00:45:15.667 + env -i TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --pid-file=/var/tmp/ceph/pid/ceph-osd.0.pid 00:45:15.667 2024-07-22T17:23:16.743+0000 7f102c852400 -1 Falling back to public interface 00:45:15.667 2024-07-22T17:23:16.785+0000 7f102c852400 -1 journal do_read_entry(8192): bad header magic 00:45:15.667 2024-07-22T17:23:16.785+0000 7f102c852400 -1 journal do_read_entry(8192): bad header magic 00:45:15.667 2024-07-22T17:23:16.794+0000 7f102c852400 -1 osd.0 0 log_to_monitors true 00:45:16.234 17:23:17 spdkcli_rbd -- common/autotest_common.sh@1025 -- # ceph osd pool create rbd 128 00:45:17.169 pool 'rbd' created 00:45:17.445 17:23:18 spdkcli_rbd -- common/autotest_common.sh@1026 -- # rbd create foo --size 1000 
00:45:21.668 17:23:22 spdkcli_rbd -- spdkcli/rbd.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py '"/bdevs/rbd create rbd foo 512'\'' '\''Ceph0'\'' True "/bdevs/rbd' create rbd foo 512 Ceph1 'True 00:45:21.668 timing_exit spdkcli_create_rbd_config 00:45:21.668 00:45:21.668 timing_enter spdkcli_check_match 00:45:21.668 check_match 00:45:21.668 timing_exit spdkcli_check_match 00:45:21.668 00:45:21.668 timing_enter spdkcli_clear_rbd_config 00:45:21.668 /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py "/bdevs/rbd' delete Ceph0 Ceph0 '"/bdevs/rbd delete_all'\'' '\''Ceph1'\'' ' 00:45:22.233 Executing command: [' ', True] 00:45:22.233 17:23:23 spdkcli_rbd -- spdkcli/rbd.sh@31 -- # rbd_cleanup 00:45:22.233 17:23:23 spdkcli_rbd -- common/autotest_common.sh@1031 -- # hash ceph 00:45:22.233 17:23:23 spdkcli_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:45:22.233 + base_dir=/var/tmp/ceph 00:45:22.233 + image=/var/tmp/ceph/ceph_raw.img 00:45:22.233 + dev=/dev/loop200 00:45:22.233 + pkill -9 ceph 00:45:22.233 + sleep 3 00:45:25.515 + umount /dev/loop200p2 00:45:25.515 + losetup -d /dev/loop200 00:45:25.515 + rm -rf /var/tmp/ceph 00:45:25.515 17:23:26 spdkcli_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:45:25.515 17:23:26 spdkcli_rbd -- spdkcli/rbd.sh@32 -- # timing_exit spdkcli_clear_rbd_config 00:45:25.515 17:23:26 spdkcli_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:45:25.515 17:23:26 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:45:25.515 17:23:26 spdkcli_rbd -- spdkcli/rbd.sh@34 -- # killprocess 127561 00:45:25.515 17:23:26 spdkcli_rbd -- common/autotest_common.sh@948 -- # '[' -z 127561 ']' 00:45:25.515 17:23:26 spdkcli_rbd -- common/autotest_common.sh@952 -- # kill -0 127561 00:45:25.515 17:23:26 spdkcli_rbd -- common/autotest_common.sh@953 -- # uname 00:45:25.515 17:23:26 spdkcli_rbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux 
']' 00:45:25.515 17:23:26 spdkcli_rbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 127561 00:45:25.515 killing process with pid 127561 00:45:25.515 17:23:26 spdkcli_rbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:45:25.515 17:23:26 spdkcli_rbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:45:25.515 17:23:26 spdkcli_rbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 127561' 00:45:25.515 17:23:26 spdkcli_rbd -- common/autotest_common.sh@967 -- # kill 127561 00:45:25.515 17:23:26 spdkcli_rbd -- common/autotest_common.sh@972 -- # wait 127561 00:45:28.047 17:23:29 spdkcli_rbd -- spdkcli/rbd.sh@1 -- # rbd_cleanup 00:45:28.047 17:23:29 spdkcli_rbd -- common/autotest_common.sh@1031 -- # hash ceph 00:45:28.047 17:23:29 spdkcli_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:45:28.047 + base_dir=/var/tmp/ceph 00:45:28.047 + image=/var/tmp/ceph/ceph_raw.img 00:45:28.047 + dev=/dev/loop200 00:45:28.047 + pkill -9 ceph 00:45:28.047 + sleep 3 00:45:31.333 + umount /dev/loop200p2 00:45:31.333 umount: /dev/loop200p2: no mount point specified. 
00:45:31.333 + losetup -d /dev/loop200
00:45:31.333 losetup: /dev/loop200: detach failed: No such device or address
00:45:31.333 + rm -rf /var/tmp/ceph
00:45:31.333 17:23:32 spdkcli_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img
00:45:31.333 17:23:32 spdkcli_rbd -- spdkcli/rbd.sh@1 -- # cleanup
00:45:31.333 17:23:32 spdkcli_rbd -- spdkcli/common.sh@10 -- # '[' -n 127561 ']'
00:45:31.333 17:23:32 spdkcli_rbd -- spdkcli/common.sh@11 -- # killprocess 127561
00:45:31.333 Process with pid 127561 is not found
00:45:31.333 17:23:32 spdkcli_rbd -- common/autotest_common.sh@948 -- # '[' -z 127561 ']'
00:45:31.333 17:23:32 spdkcli_rbd -- common/autotest_common.sh@952 -- # kill -0 127561
00:45:31.333 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (127561) - No such process
00:45:31.333 17:23:32 spdkcli_rbd -- common/autotest_common.sh@975 -- # echo 'Process with pid 127561 is not found'
00:45:31.333 17:23:32 spdkcli_rbd -- spdkcli/common.sh@13 -- # '[' -n '' ']'
00:45:31.333 17:23:32 spdkcli_rbd -- spdkcli/common.sh@16 -- # '[' -n '' ']'
00:45:31.333 17:23:32 spdkcli_rbd -- spdkcli/common.sh@19 -- # '[' -n '' ']'
00:45:31.333 17:23:32 spdkcli_rbd -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_rbd.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio
00:45:31.333 ************************************
00:45:31.333 END TEST spdkcli_rbd
00:45:31.333 ************************************
00:45:31.333
00:45:31.333 real 0m33.128s
00:45:31.333 user 1m1.077s
00:45:31.333 sys 0m1.735s
00:45:31.333 17:23:32 spdkcli_rbd -- common/autotest_common.sh@1124 -- # xtrace_disable
00:45:31.333 17:23:32 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x
00:45:31.333 17:23:32 -- common/autotest_common.sh@1142 -- # return 0
00:45:31.333 17:23:32 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:45:31.333 17:23:32 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:45:31.333 17:23:32 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:45:31.333 17:23:32 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:45:31.333 17:23:32 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
00:45:31.333 17:23:32 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:45:31.333 17:23:32 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]]
00:45:31.333 17:23:32 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:45:31.333 17:23:32 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:45:31.333 17:23:32 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:45:31.333 17:23:32 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT
00:45:31.333 17:23:32 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup
00:45:31.333 17:23:32 -- common/autotest_common.sh@722 -- # xtrace_disable
00:45:31.333 17:23:32 -- common/autotest_common.sh@10 -- # set +x
00:45:31.333 17:23:32 -- spdk/autotest.sh@383 -- # autotest_cleanup
00:45:31.333 17:23:32 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:45:31.333 17:23:32 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:45:31.333 17:23:32 -- common/autotest_common.sh@10 -- # set +x
00:45:32.267 INFO: APP EXITING
00:45:32.267 INFO: killing all VMs
00:45:32.267 INFO: killing vhost app
00:45:32.267 INFO: EXIT DONE
00:45:32.834 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:45:32.834 Waiting for block devices as requested
00:45:32.834 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:45:32.834 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:45:33.769 0000:00:10.0 (1b36 0010): Active devices: data@nvme1n1, so not binding PCI dev
00:45:33.769 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:45:33.769 Cleaning
00:45:33.769 Removing: /var/run/dpdk/spdk0/config
00:45:33.769 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:45:33.769 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:45:33.769 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:45:33.769 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:45:33.769 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:45:33.769 Removing: /var/run/dpdk/spdk0/hugepage_info
00:45:33.769 Removing: /var/run/dpdk/spdk1/config
00:45:33.769 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:45:33.769 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:45:33.769 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:45:33.769 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:45:33.769 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:45:33.769 Removing: /var/run/dpdk/spdk1/hugepage_info
00:45:33.769 Removing: /dev/shm/iscsi_trace.pid78431
00:45:33.769 Removing: /dev/shm/spdk_tgt_trace.pid59060
00:45:33.769 Removing: /var/run/dpdk/spdk0
00:45:33.769 Removing: /var/run/dpdk/spdk1
00:45:33.769 Removing: /var/run/dpdk/spdk_pid123525
00:45:33.769 Removing: /var/run/dpdk/spdk_pid123842
00:45:33.769 Removing: /var/run/dpdk/spdk_pid123892
00:45:33.769 Removing: /var/run/dpdk/spdk_pid124000
00:45:33.769 Removing: /var/run/dpdk/spdk_pid124074
00:45:33.769 Removing: /var/run/dpdk/spdk_pid124146
00:45:33.769 Removing: /var/run/dpdk/spdk_pid124343
00:45:33.769 Removing: /var/run/dpdk/spdk_pid124392
00:45:33.769 Removing: /var/run/dpdk/spdk_pid124426
00:45:33.769 Removing: /var/run/dpdk/spdk_pid124459
00:45:33.769 Removing: /var/run/dpdk/spdk_pid124497
00:45:33.769 Removing: /var/run/dpdk/spdk_pid124608
00:45:33.769 Removing: /var/run/dpdk/spdk_pid124657
00:45:33.769 Removing: /var/run/dpdk/spdk_pid124889
00:45:33.769 Removing: /var/run/dpdk/spdk_pid125213
00:45:33.769 Removing: /var/run/dpdk/spdk_pid125475
00:45:33.769 Removing: /var/run/dpdk/spdk_pid126362
00:45:33.769 Removing: /var/run/dpdk/spdk_pid126425
00:45:33.769 Removing: /var/run/dpdk/spdk_pid126725
00:45:33.769 Removing: /var/run/dpdk/spdk_pid126917
00:45:33.769 Removing: /var/run/dpdk/spdk_pid127093
00:45:33.769 Removing: /var/run/dpdk/spdk_pid127224
00:45:33.769 Removing: /var/run/dpdk/spdk_pid127338
00:45:33.769 Removing: /var/run/dpdk/spdk_pid127417
00:45:33.769 Removing: /var/run/dpdk/spdk_pid127447
00:45:33.769 Removing: /var/run/dpdk/spdk_pid127561
00:45:33.769 Removing: /var/run/dpdk/spdk_pid58838
00:45:33.769 Removing: /var/run/dpdk/spdk_pid59060
00:45:33.769 Removing: /var/run/dpdk/spdk_pid59281
00:45:33.769 Removing: /var/run/dpdk/spdk_pid59385
00:45:33.769 Removing: /var/run/dpdk/spdk_pid59441
00:45:33.769 Removing: /var/run/dpdk/spdk_pid59580
00:45:33.769 Removing: /var/run/dpdk/spdk_pid59598
00:45:33.769 Removing: /var/run/dpdk/spdk_pid59752
00:45:33.769 Removing: /var/run/dpdk/spdk_pid59947
00:45:33.769 Removing: /var/run/dpdk/spdk_pid60149
00:45:33.769 Removing: /var/run/dpdk/spdk_pid60254
00:45:33.769 Removing: /var/run/dpdk/spdk_pid60357
00:45:33.769 Removing: /var/run/dpdk/spdk_pid60482
00:45:33.769 Removing: /var/run/dpdk/spdk_pid60582
00:45:33.769 Removing: /var/run/dpdk/spdk_pid60616
00:45:33.769 Removing: /var/run/dpdk/spdk_pid60658
00:45:33.769 Removing: /var/run/dpdk/spdk_pid60726
00:45:33.769 Removing: /var/run/dpdk/spdk_pid60838
00:45:33.769 Removing: /var/run/dpdk/spdk_pid61290
00:45:33.769 Removing: /var/run/dpdk/spdk_pid61365
00:45:33.769 Removing: /var/run/dpdk/spdk_pid61443
00:45:33.769 Removing: /var/run/dpdk/spdk_pid61468
00:45:33.769 Removing: /var/run/dpdk/spdk_pid61623
00:45:33.769 Removing: /var/run/dpdk/spdk_pid61639
00:45:33.769 Removing: /var/run/dpdk/spdk_pid61799
00:45:33.769 Removing: /var/run/dpdk/spdk_pid61815
00:45:33.769 Removing: /var/run/dpdk/spdk_pid61885
00:45:34.027 Removing: /var/run/dpdk/spdk_pid61908
00:45:34.027 Removing: /var/run/dpdk/spdk_pid61972
00:45:34.027 Removing: /var/run/dpdk/spdk_pid61990
00:45:34.027 Removing: /var/run/dpdk/spdk_pid62183
00:45:34.027 Removing: /var/run/dpdk/spdk_pid62225
00:45:34.027 Removing: /var/run/dpdk/spdk_pid62306
00:45:34.027 Removing: /var/run/dpdk/spdk_pid62387
00:45:34.027 Removing: /var/run/dpdk/spdk_pid62429
00:45:34.027 Removing: /var/run/dpdk/spdk_pid62507
00:45:34.027 Removing: /var/run/dpdk/spdk_pid62550
00:45:34.027 Removing: /var/run/dpdk/spdk_pid62600
00:45:34.027 Removing: /var/run/dpdk/spdk_pid62648
00:45:34.027 Removing: /var/run/dpdk/spdk_pid62699
00:45:34.027 Removing: /var/run/dpdk/spdk_pid62745
00:45:34.027 Removing: /var/run/dpdk/spdk_pid62792
00:45:34.027 Removing: /var/run/dpdk/spdk_pid62844
00:45:34.027 Removing: /var/run/dpdk/spdk_pid62885
00:45:34.027 Removing: /var/run/dpdk/spdk_pid62937
00:45:34.027 Removing: /var/run/dpdk/spdk_pid62984
00:45:34.027 Removing: /var/run/dpdk/spdk_pid63030
00:45:34.027 Removing: /var/run/dpdk/spdk_pid63081
00:45:34.027 Removing: /var/run/dpdk/spdk_pid63129
00:45:34.027 Removing: /var/run/dpdk/spdk_pid63175
00:45:34.027 Removing: /var/run/dpdk/spdk_pid63222
00:45:34.027 Removing: /var/run/dpdk/spdk_pid63274
00:45:34.027 Removing: /var/run/dpdk/spdk_pid63318
00:45:34.027 Removing: /var/run/dpdk/spdk_pid63373
00:45:34.027 Removing: /var/run/dpdk/spdk_pid63420
00:45:34.027 Removing: /var/run/dpdk/spdk_pid63467
00:45:34.027 Removing: /var/run/dpdk/spdk_pid63555
00:45:34.027 Removing: /var/run/dpdk/spdk_pid63675
00:45:34.027 Removing: /var/run/dpdk/spdk_pid64026
00:45:34.027 Removing: /var/run/dpdk/spdk_pid64057
00:45:34.027 Removing: /var/run/dpdk/spdk_pid64088
00:45:34.027 Removing: /var/run/dpdk/spdk_pid64141
00:45:34.027 Removing: /var/run/dpdk/spdk_pid64146
00:45:34.027 Removing: /var/run/dpdk/spdk_pid64175
00:45:34.027 Removing: /var/run/dpdk/spdk_pid64203
00:45:34.027 Removing: /var/run/dpdk/spdk_pid64220
00:45:34.027 Removing: /var/run/dpdk/spdk_pid64275
00:45:34.027 Removing: /var/run/dpdk/spdk_pid64297
00:45:34.027 Removing: /var/run/dpdk/spdk_pid64366
00:45:34.027 Removing: /var/run/dpdk/spdk_pid64460
00:45:34.027 Removing: /var/run/dpdk/spdk_pid65239
00:45:34.027 Removing: /var/run/dpdk/spdk_pid67082
00:45:34.027 Removing: /var/run/dpdk/spdk_pid67377
00:45:34.027 Removing:
/var/run/dpdk/spdk_pid67700 00:45:34.027 Removing: /var/run/dpdk/spdk_pid67974 00:45:34.027 Removing: /var/run/dpdk/spdk_pid68654 00:45:34.027 Removing: /var/run/dpdk/spdk_pid73290 00:45:34.027 Removing: /var/run/dpdk/spdk_pid77285 00:45:34.027 Removing: /var/run/dpdk/spdk_pid78062 00:45:34.027 Removing: /var/run/dpdk/spdk_pid78102 00:45:34.027 Removing: /var/run/dpdk/spdk_pid78431 00:45:34.027 Removing: /var/run/dpdk/spdk_pid79822 00:45:34.027 Removing: /var/run/dpdk/spdk_pid80223 00:45:34.027 Removing: /var/run/dpdk/spdk_pid80280 00:45:34.027 Removing: /var/run/dpdk/spdk_pid80674 00:45:34.027 Removing: /var/run/dpdk/spdk_pid83128 00:45:34.027 Clean 00:45:34.027 17:23:35 -- common/autotest_common.sh@1451 -- # return 0 00:45:34.027 17:23:35 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:45:34.027 17:23:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:45:34.027 17:23:35 -- common/autotest_common.sh@10 -- # set +x 00:45:34.285 17:23:35 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:45:34.285 17:23:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:45:34.285 17:23:35 -- common/autotest_common.sh@10 -- # set +x 00:45:34.285 17:23:35 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:45:34.285 17:23:35 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:45:34.285 17:23:35 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:45:34.285 17:23:35 -- spdk/autotest.sh@391 -- # hash lcov 00:45:34.285 17:23:35 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:45:34.285 17:23:35 -- spdk/autotest.sh@393 -- # hostname 00:45:34.285 17:23:35 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t 
fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:45:34.543 geninfo: WARNING: invalid characters removed from testname! 00:46:06.663 17:24:03 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:46:06.663 17:24:07 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:46:08.604 17:24:10 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:46:11.888 17:24:12 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:46:14.416 17:24:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info 
'*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:46:16.947 17:24:18 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:46:19.478 17:24:20 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:46:19.478 17:24:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:19.478 17:24:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:46:19.478 17:24:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:19.478 17:24:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:19.478 17:24:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:19.478 17:24:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:19.478 17:24:21 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:19.478 17:24:21 -- paths/export.sh@5 -- $ export PATH 00:46:19.478 17:24:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:19.478 17:24:21 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:46:19.478 17:24:21 -- common/autobuild_common.sh@447 -- $ date +%s 00:46:19.478 17:24:21 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721669061.XXXXXX 00:46:19.478 17:24:21 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721669061.S58tas 00:46:19.478 17:24:21 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:46:19.478 17:24:21 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:46:19.478 17:24:21 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:46:19.478 17:24:21 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:46:19.479 17:24:21 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:46:19.479 17:24:21 -- common/autobuild_common.sh@463 -- $ 
get_config_params 00:46:19.479 17:24:21 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:46:19.479 17:24:21 -- common/autotest_common.sh@10 -- $ set +x 00:46:19.479 17:24:21 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-rbd --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:46:19.479 17:24:21 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:46:19.479 17:24:21 -- pm/common@17 -- $ local monitor 00:46:19.479 17:24:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:19.479 17:24:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:19.479 17:24:21 -- pm/common@25 -- $ sleep 1 00:46:19.479 17:24:21 -- pm/common@21 -- $ date +%s 00:46:19.479 17:24:21 -- pm/common@21 -- $ date +%s 00:46:19.736 17:24:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721669061 00:46:19.736 17:24:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721669061 00:46:19.736 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721669061_collect-vmstat.pm.log 00:46:19.736 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721669061_collect-cpu-load.pm.log 00:46:20.670 17:24:22 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:46:20.670 17:24:22 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:46:20.670 17:24:22 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:46:20.670 17:24:22 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:46:20.670 17:24:22 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:46:20.670 17:24:22 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:46:20.670 
17:24:22 -- spdk/autopackage.sh@19 -- $ timing_finish
00:46:20.670 17:24:22 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:46:20.670 17:24:22 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:46:20.670 17:24:22 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:46:20.670 17:24:22 -- spdk/autopackage.sh@20 -- $ exit 0
00:46:20.670 17:24:22 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:46:20.670 17:24:22 -- pm/common@29 -- $ signal_monitor_resources TERM
00:46:20.670 17:24:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:46:20.670 17:24:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:20.670 17:24:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:46:20.670 17:24:22 -- pm/common@44 -- $ pid=130058
00:46:20.670 17:24:22 -- pm/common@50 -- $ kill -TERM 130058
00:46:20.670 17:24:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:20.670 17:24:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:46:20.670 17:24:22 -- pm/common@44 -- $ pid=130059
00:46:20.670 17:24:22 -- pm/common@50 -- $ kill -TERM 130059
00:46:20.670 + [[ -n 5272 ]]
00:46:20.670 + sudo kill 5272
00:46:20.681 [Pipeline] }
00:46:20.701 [Pipeline] // timeout
00:46:20.709 [Pipeline] }
00:46:20.727 [Pipeline] // stage
00:46:20.732 [Pipeline] }
00:46:20.752 [Pipeline] // catchError
00:46:20.760 [Pipeline] stage
00:46:20.761 [Pipeline] { (Stop VM)
00:46:20.773 [Pipeline] sh
00:46:21.050 + vagrant halt
00:46:25.234 ==> default: Halting domain...
00:46:31.831 [Pipeline] sh
00:46:32.122 + vagrant destroy -f
00:46:36.310 ==> default: Removing domain...
00:46:36.577 [Pipeline] sh
00:46:36.852 + mv output /var/jenkins/workspace/iscsi-vg-autotest/output
00:46:36.863 [Pipeline] }
00:46:36.882 [Pipeline] // stage
00:46:36.887 [Pipeline] }
00:46:36.902 [Pipeline] // dir
00:46:36.910 [Pipeline] }
00:46:36.923 [Pipeline] // wrap
00:46:36.928 [Pipeline] }
00:46:36.938 [Pipeline] // catchError
00:46:36.946 [Pipeline] stage
00:46:36.948 [Pipeline] { (Epilogue)
00:46:36.962 [Pipeline] sh
00:46:37.240 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:46:45.403 [Pipeline] catchError
00:46:45.405 [Pipeline] {
00:46:45.420 [Pipeline] sh
00:46:45.700 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:46:45.700 Artifacts sizes are good
00:46:45.711 [Pipeline] }
00:46:45.729 [Pipeline] // catchError
00:46:45.741 [Pipeline] archiveArtifacts
00:46:45.750 Archiving artifacts
00:46:46.633 [Pipeline] cleanWs
00:46:46.645 [WS-CLEANUP] Deleting project workspace...
00:46:46.645 [WS-CLEANUP] Deferred wipeout is used...
00:46:46.651 [WS-CLEANUP] done
00:46:46.653 [Pipeline] }
00:46:46.673 [Pipeline] // stage
00:46:46.679 [Pipeline] }
00:46:46.697 [Pipeline] // node
00:46:46.703 [Pipeline] End of Pipeline
00:46:46.731 Finished: SUCCESS